Dataset columns:

| Column | Type | Values / lengths |
| --- | --- | --- |
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | n/a |
| tags | listlengths | 1 to 1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25 to 25 |
| arxiv | listlengths | 0 to 201 |
| languages | listlengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | listlengths | 0 to 722 |
| processed_texts | listlengths | 1 to 723 |
text-classification
transformers
# Cross-Encoder for Question Natural Language Inference (QNLI) This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data Given a question and a paragraph, can the question be answered by the paragraph? The model has been trained on the [GLUE QNLI](https://arxiv.org/abs/1804.07461) dataset, which transformed the [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/) into an NLI task. ## Performance For performance results of this model, see [SBERT.net Pre-trained Cross-Encoders](https://www.sbert.net/docs/pretrained_cross-encoders.html). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Query1', 'Paragraph1'), ('Query2', 'Paragraph2')]) # e.g. scores = model.predict([('How many people live in Berlin?', 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'), ('What is the size of New York?', 'New York City is famous for the Metropolitan Museum of Art.')]) ``` ## Usage with Transformers AutoModel You can also use the model directly with the Transformers library (without the SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'What is the size of New York?'], ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = torch.sigmoid(model(**features).logits) print(scores) ```
{"license": "apache-2.0"}
cross-encoder/qnli-electra-base
null
[ "transformers", "pytorch", "electra", "text-classification", "arxiv:1804.07461", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1804.07461" ]
[]
TAGS #transformers #pytorch #electra #text-classification #arxiv-1804.07461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data Given a question and paragraph, can the question be answered by the paragraph? The models have been trained on the GLUE QNLI dataset, which transformed the SQuAD dataset into an NLI task. ## Performance For performance results of this model, see [URL Pre-trained Cross-Encoder][URL ## Usage Pre-trained models can be used like this: ## Usage with Transformers AutoModel You can use the model also directly with Transformers library (without SentenceTransformers library):
[ "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nGiven a question and paragraph, can the question be answered by the paragraph? The models have been trained on the GLUE QNLI dataset, which transformed the SQuAD dataset into an NLI task.", "## Performance\nFor performance results of this model, see [URL Pre-trained Cross-Encoder][URL", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):" ]
[ "TAGS\n#transformers #pytorch #electra #text-classification #arxiv-1804.07461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nGiven a question and paragraph, can the question be answered by the paragraph? The models have been trained on the GLUE QNLI dataset, which transformed the SQuAD dataset into an NLI task.", "## Performance\nFor performance results of this model, see [URL Pre-trained Cross-Encoder][URL", "## Usage\n\nPre-trained models can be used like this:", "## Usage with Transformers AutoModel\nYou can use the model also directly with Transformers library (without SentenceTransformers library):" ]
text-classification
transformers
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely it is that the two given questions are duplicates. Note: The model is not suitable for estimating the similarity of questions; e.g., the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as they are not duplicates. ## Usage and Performance Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')]) ``` You can also use this model without sentence_transformers, using only the Transformers ``AutoModel`` class (see the sketch below).
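The card above points to the Transformers ``AutoModel`` route but does not show it. Below is a minimal sketch of that usage, not taken from the card itself: the repository id `cross-encoder/quora-distilroberta-base` comes from this entry's `id` field, and passing the single logit through a sigmoid to obtain a 0-1 duplicate score is an assumption that mirrors the AutoModel example in the QNLI card earlier in this dump.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Minimal sketch: duplicate-question scoring without sentence_transformers.
# The sigmoid over the single logit is assumed to match the CrossEncoder's
# default activation for this model family.
model_name = "cross-encoder/quora-distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

features = tokenizer(
    ["How to learn Java", "How many people live in Berlin?"],
    ["How to learn Python", "What is the population of Berlin?"],
    padding=True, truncation=True, return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = torch.sigmoid(model(**features).logits)
print(scores)  # one duplicate-likelihood score per question pair
```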
{"license": "apache-2.0"}
cross-encoder/quora-distilroberta-base
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data This model was trained on the Quora Duplicate Questions dataset. The model will predict a score between 0 and 1 how likely the two given questions are duplicates. Note: The model is not suitable to estimate the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rahter low score, as these are not duplicates. ## Usage and Performance Pre-trained models can be used like this: You can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class
[ "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the Quora Duplicate Questions dataset. The model will predict a score between 0 and 1 how likely the two given questions are duplicates.\n\nNote: The model is not suitable to estimate the similarity of questions, e.g. the two questions \"How to learn Java\" and \"How to learn Python\" will result in a rahter low score, as these are not duplicates.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the Quora Duplicate Questions dataset. The model will predict a score between 0 and 1 how likely the two given questions are duplicates.\n\nNote: The model is not suitable to estimate the similarity of questions, e.g. the two questions \"How to learn Java\" and \"How to learn Python\" will result in a rahter low score, as these are not duplicates.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
text-classification
transformers
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely it is that the two given questions are duplicates. Note: The model is not suitable for estimating the similarity of questions; e.g., the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as they are not duplicates. ## Usage and Performance Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')]) ``` You can also use this model without sentence_transformers, using only the Transformers ``AutoModel`` class.
{"license": "apache-2.0"}
cross-encoder/quora-roberta-base
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data This model was trained on the Quora Duplicate Questions dataset. The model will predict a score between 0 and 1 how likely the two given questions are duplicates. Note: The model is not suitable to estimate the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rahter low score, as these are not duplicates. ## Usage and Performance Pre-trained models can be used like this: You can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class
[ "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the Quora Duplicate Questions dataset. The model will predict a score between 0 and 1 how likely the two given questions are duplicates.\n\nNote: The model is not suitable to estimate the similarity of questions, e.g. the two questions \"How to learn Java\" and \"How to learn Python\" will result in a rahter low score, as these are not duplicates.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the Quora Duplicate Questions dataset. The model will predict a score between 0 and 1 how likely the two given questions are duplicates.\n\nNote: The model is not suitable to estimate the similarity of questions, e.g. the two questions \"How to learn Java\" and \"How to learn Python\" will result in a rahter low score, as these are not duplicates.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
text-classification
transformers
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely it is that the two given questions are duplicates. Note: The model is not suitable for estimating the similarity of questions; e.g., the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as they are not duplicates. ## Usage and Performance Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')]) ``` You can also use this model without sentence_transformers, using only the Transformers ``AutoModel`` class.
{"license": "apache-2.0"}
cross-encoder/quora-roberta-large
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data This model was trained on the Quora Duplicate Questions dataset. The model will predict a score between 0 and 1 how likely the two given questions are duplicates. Note: The model is not suitable to estimate the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rahter low score, as these are not duplicates. ## Usage and Performance Pre-trained models can be used like this: You can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class
[ "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the Quora Duplicate Questions dataset. The model will predict a score between 0 and 1 how likely the two given questions are duplicates.\n\nNote: The model is not suitable to estimate the similarity of questions, e.g. the two questions \"How to learn Java\" and \"How to learn Python\" will result in a rahter low score, as these are not duplicates.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the Quora Duplicate Questions dataset. The model will predict a score between 0 and 1 how likely the two given questions are duplicates.\n\nNote: The model is not suitable to estimate the similarity of questions, e.g. the two questions \"How to learn Java\" and \"How to learn Python\" will result in a rahter low score, as these are not duplicates.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
text-classification
transformers
# Cross-Encoder for Semantic Textual Similarity This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 for the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')]) ``` The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`. You can also use this model without sentence_transformers, using only the Transformers ``AutoModel`` class (see the sketch below).
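As with the Quora cards, the ``AutoModel`` path is mentioned but not shown. A minimal sketch under the same assumption (the single regression logit plus a sigmoid yields the 0-1 similarity score), with the checkpoint id taken from this entry's `id` field:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Minimal sketch: semantic-similarity scoring without sentence_transformers.
model_name = "cross-encoder/stsb-TinyBERT-L-4"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

features = tokenizer(
    ["A man is playing a guitar."],
    ["Someone is playing an instrument."],
    padding=True, truncation=True, return_tensors="pt",
)

model.eval()
with torch.no_grad():
    # Sigmoid over the single logit; assumed to match the CrossEncoder's
    # default activation for the stsb models.
    similarity = torch.sigmoid(model(**features).logits)
print(similarity)
```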
{"license": "apache-2.0"}
cross-encoder/stsb-TinyBERT-L-4
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data This model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: The model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'. You can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class
[ "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nThe model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nThe model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
text-classification
transformers
# Cross-Encoder for Semantic Textual Similarity This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 for the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')]) ``` The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`. You can also use this model without sentence_transformers, using only the Transformers ``AutoModel`` class.
{"license": "apache-2.0"}
cross-encoder/stsb-distilroberta-base
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data This model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: The model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'. You can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class
[ "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nThe model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nThe model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
text-classification
transformers
# Cross-Encoder for Semantic Textual Similarity This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 for the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')]) ``` The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`. You can also use this model without sentence_transformers, using only the Transformers ``AutoModel`` class.
{"license": "apache-2.0"}
cross-encoder/stsb-roberta-base
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data This model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: The model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'. You can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class
[ "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nThe model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nThe model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
text-classification
transformers
# Cross-Encoder for Semantic Textual Similarity This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 for the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')]) ``` The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`. You can also use this model without sentence_transformers, using only the Transformers ``AutoModel`` class.
{"license": "apache-2.0"}
cross-encoder/stsb-roberta-large
null
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Cross-Encoder for Quora Duplicate Questions Detection This model was trained using SentenceTransformers Cross-Encoder class. ## Training Data This model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences. ## Usage and Performance Pre-trained models can be used like this: The model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'. You can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class
[ "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nThe model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
[ "TAGS\n#transformers #pytorch #jax #roberta #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Cross-Encoder for Quora Duplicate Questions Detection\nThis model was trained using SentenceTransformers Cross-Encoder class.", "## Training Data\nThis model was trained on the STS benchmark dataset. The model will predict a score between 0 and 1 how for the semantic similarity of two sentences.", "## Usage and Performance\n\nPre-trained models can be used like this:\n\n\nThe model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class" ]
text-generation
transformers
### Kw2Poem
{"language": "vi", "tags": ["gpt"], "widget": [{"text": "<s> n\u00fai nh\u00e0 xe [SEP] "}]}
crylake/kw2poem-generation
null
[ "transformers", "pytorch", "gpt2", "text-generation", "gpt", "vi", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "vi" ]
TAGS #transformers #pytorch #gpt2 #text-generation #gpt #vi #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
### Kw2Poem
[ "### Kw2Poem" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #gpt #vi #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Kw2Poem" ]
text-generation
transformers
# Rick DialoGPT model
{"tags": ["conversational"]}
crystalgate/DialoGPT-small-rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#Rick Dialogpt model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
token-classification
spacy
NER Model for 'Ministerratsprotokolle' | Feature | Description | | --- | --- | | **Name** | `de_MRP_NER` | | **Version** | `0.0.0` | | **spaCy** | `>=3.1.0,<3.2.0` | | **Default Pipeline** | `tok2vec`, `ner` | | **Components** | `tok2vec`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | `cc-by` | | **Author** | [Peter Andorfer]() | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `GPE`, `LOC`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 88.04 | | `ENTS_P` | 90.53 | | `ENTS_R` | 85.69 | | `TOK2VEC_LOSS` | 40077.56 | | `NER_LOSS` | 77727.57 |
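The card documents the pipeline components and accuracy but gives no loading example. A minimal usage sketch, assuming the `de_MRP_NER` pipeline package has already been installed locally from this repository; the example sentence is illustrative, not from the card.

```python
import spacy

# Load the installed pipeline package and run NER over a sample sentence.
nlp = spacy.load("de_MRP_NER")
doc = nlp("Der Ministerrat tagte in Wien unter dem Vorsitz des Kaisers.")
for ent in doc.ents:
    # Labels are GPE, LOC, ORG or PER, as listed in the label scheme above.
    print(ent.text, ent.label_)
```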
{"language": ["de"], "license": "cc-by-4.0", "tags": ["spacy", "token-classification"]}
csae8092/de_MRP_NER
null
[ "spacy", "token-classification", "de", "license:cc-by-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "de" ]
TAGS #spacy #token-classification #de #license-cc-by-4.0 #model-index #region-us
NER Model for 'Ministerratsprotokolle' ### Label Scheme View label scheme (4 labels for 1 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (4 labels for 1 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #de #license-cc-by-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (4 labels for 1 components)", "### Accuracy" ]
token-classification
spacy
Regensburger Reichstag von 1576 | Feature | Description | | --- | --- | | **Name** | `de_RTA_NER` | | **Version** | `0.0.0` | | **spaCy** | `>=3.1.0,<3.2.0` | | **Default Pipeline** | `tok2vec`, `ner` | | **Components** | `tok2vec`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | `https://creativecommons.org/licenses/by-nc/4.0/` | | **Author** | [n/a](https://reichstagsakten-1576.uni-graz.at) | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `DATE`, `LOC`, `PER`, `TIME` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 86.86 | | `ENTS_P` | 86.30 | | `ENTS_R` | 87.43 | | `TOK2VEC_LOSS` | 43588.74 | | `NER_LOSS` | 95573.96 |
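Analogous to the previous spaCy entry, a minimal usage sketch for `de_RTA_NER`, assuming the pipeline package has been installed from this repository; the sentence is invented for illustration.

```python
import spacy

nlp = spacy.load("de_RTA_NER")
doc = nlp("Am 25. Juni 1576 eroeffnete Kaiser Maximilian II. den Reichstag zu Regensburg.")
for ent in doc.ents:
    # Expected labels: DATE, LOC, PER, TIME (see the label scheme above).
    print(ent.text, ent.label_)
```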
{"language": ["de"], "license": "cc-by-nc-4.0", "tags": ["spacy", "token-classification"]}
csae8092/de_RTA_NER
null
[ "spacy", "token-classification", "de", "license:cc-by-nc-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "de" ]
TAGS #spacy #token-classification #de #license-cc-by-nc-4.0 #model-index #region-us
Regensburger Reichstag von 1576 ### Label Scheme View label scheme (4 labels for 1 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (4 labels for 1 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #de #license-cc-by-nc-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (4 labels for 1 components)", "### Accuracy" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2303 - Accuracy: 0.9325 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1942 | 1.0 | 1250 | 0.1751 | 0.932 | | 0.0935 | 2.0 | 2500 | 0.2303 | 0.9325 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
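The auto-generated card leaves the usage section as "More information needed". A minimal inference sketch, assuming the checkpoint is served under this entry's repository id and that the task is classifying Spanish reviews, as the `amazon_reviews_multi` `es` config in the metadata suggests:

```python
from transformers import pipeline

# Minimal sketch: classify a Spanish product review with the fine-tuned model.
classifier = pipeline(
    "text-classification",
    model="csalamea/roberta-base-bne-finetuned-amazon_reviews_multi",
)
print(classifier("El producto llegó roto y el vendedor nunca respondió."))
```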
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-bne-finetuned-amazon_reviews_multi", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}, "metrics": [{"type": "accuracy", "value": 0.9325, "name": "Accuracy"}]}]}]}
csalamea/roberta-base-bne-finetuned-amazon_reviews_multi
null
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
roberta-base-bne-finetuned-amazon\_reviews\_multi ================================================= This model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the amazon\_reviews\_multi dataset. It achieves the following results on the evaluation set: * Loss: 0.2303 * Accuracy: 0.9325 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.9.0+cu102 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
question-answering
transformers
## BERT-base uncased model fine-tuned on SQuAD v1 This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer). This model is case-insensitive: it does not make a difference between english and English. ## Details | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD1.1 | train | 90.6K | | SQuAD1.1 | eval | 11.1k | ### Fine-tuning - Python: `3.7.5` - Machine specs: `CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz` `Memory: 32 GiB` `GPUs: 2 GeForce GTX 1070, each with 8GiB memory` `GPU driver: 418.87.01, CUDA: 10.1` - script: ```shell # after install https://github.com/huggingface/transformers cd examples/question-answering mkdir -p data wget -O data/train-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json wget -O data/dev-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json python run_squad.py \ --model_type bert \ --model_name_or_path bert-base-uncased \ --do_train \ --do_eval \ --do_lower_case \ --train_file train-v1.1.json \ --predict_file dev-v1.1.json \ --per_gpu_train_batch_size 12 \ --per_gpu_eval_batch_size=16 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 320 \ --doc_stride 128 \ --data_dir data \ --output_dir data/bert-base-uncased-squad-v1 2>&1 | tee train-energy-bert-base-squad-v1.log ``` It took about 2 hours to finish. ### Results **Model size**: `418M` | Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| | ------ | --------- | --------- | | **EM** | **80.9** | **80.8** | | **F1** | **88.2** | **88.5** | Note that the above results didn't involve any hyperparameter search. ## Example Usage ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="csarron/bert-base-uncased-squad-v1", tokenizer="csarron/bert-base-uncased-squad-v1" ) predictions = qa_pipeline({ 'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.", 'question': "What day was the game played on?" }) print(predictions) # output: # {'score': 0.8730505704879761, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'} ``` > Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp) > Made with ❤️ in New York.
{"language": "en", "license": "mit", "tags": ["question-answering", "bert", "bert-base"], "datasets": ["squad"], "metrics": ["squad"], "widget": [{"text": "Which name is also used to describe the Amazon rainforest in English?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}, {"text": "How many square kilometers of rainforest is covered in the basin?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}], "model-index": [{"name": "csarron/bert-base-uncased-squad-v1", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 80.9104, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDJlOWQ0OTE0ZjRhMTQwNDY5MjVhYmZiN2RmYzY0OWJiOWUyNjcyMWU5N2I3YmU0OThjZTVjNTc2MjM2Yzg5NiIsInZlcnNpb24iOjF9.cuJ34B-ngUur5wKGhfhVP8FM6NX4IFrIJEdXypbLQJw1i8M5Bb2EeIs-0M5n35YIx2PfqSQcnVj_jP8vLUk4Dg"}, {"type": "f1", "value": 88.2302, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmE4NzFmNDA3MDRiODk3ZDg5NWYyNjczOGE5YjdkZWQ0ZmEzNWU5YjFjMzc1ODA2OGRjYzU0Y2M5MmU0NGNhYSIsInZlcnNpb24iOjF9.phmkVWF3I-rl2xrHW0EW9OQqzfuefoqNjWplOpFdzJuW8d2C4sJ8snW0Ikw9kQqZaBCdwdkmsf5VTgOupHb8Dw"}]}]}]}
csarron/bert-base-uncased-squad-v1
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "question-answering", "bert-base", "en", "dataset:squad", "license:mit", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #safetensors #bert #question-answering #bert-base #en #dataset-squad #license-mit #model-index #endpoints_compatible #has_space #region-us
BERT-base uncased model fine-tuned on SQuAD v1 ---------------------------------------------- This model was fine-tuned from the HuggingFace BERT base uncased checkpoint on SQuAD1.1. This model is case-insensitive: it does not make a difference between english and English. Details ------- Dataset: SQuAD1.1, Split: train, # samples: 90.6K Dataset: SQuAD1.1, Split: eval, # samples: 11.1k ### Fine-tuning * Python: '3.7.5' * Machine specs: 'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz' 'Memory: 32 GiB' 'GPUs: 2 GeForce GTX 1070, each with 8GiB memory' 'GPU driver: 418.87.01, CUDA: 10.1' * script: It took about 2 hours to finish. ### Results Model size: '418M' Metric: EM, # Value: 80.9, # Original (Table 2): 80.8 Metric: F1, # Value: 88.2, # Original (Table 2): 88.5 Note that the above results didn't involve any hyperparameter search. Example Usage ------------- > > Created by Qingqing Cao | GitHub | Twitter > > > > > Made with ️ in New York. > > >
[ "# samples: 90.6K\nDataset: SQuAD1.1, Split: eval, # samples: 11.1k", "### Fine-tuning\n\n\n* Python: '3.7.5'\n* Machine specs:\n\n\n'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz'\n\n\n'Memory: 32 GiB'\n\n\n'GPUs: 2 GeForce GTX 1070, each with 8GiB memory'\n\n\n'GPU driver: 418.87.01, CUDA: 10.1'\n* script:\n\n\nIt took about 2 hours to finish.", "### Results\n\n\nModel size: '418M'\n\n\nMetric: EM, # Value: 80.9, # Original (Table 2): 80.8\nMetric: F1, # Value: 88.2, # Original (Table 2): 88.5\n\n\nNote that the above results didn't involve any hyperparameter search.\n\n\nExample Usage\n-------------\n\n\n\n> \n> Created by Qingqing Cao | GitHub | Twitter\n> \n> \n> \n\n\n\n> \n> Made with ️ in New York.\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #bert #question-answering #bert-base #en #dataset-squad #license-mit #model-index #endpoints_compatible #has_space #region-us \n", "# samples: 90.6K\nDataset: SQuAD1.1, Split: eval, # samples: 11.1k", "### Fine-tuning\n\n\n* Python: '3.7.5'\n* Machine specs:\n\n\n'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz'\n\n\n'Memory: 32 GiB'\n\n\n'GPUs: 2 GeForce GTX 1070, each with 8GiB memory'\n\n\n'GPU driver: 418.87.01, CUDA: 10.1'\n* script:\n\n\nIt took about 2 hours to finish.", "### Results\n\n\nModel size: '418M'\n\n\nMetric: EM, # Value: 80.9, # Original (Table 2): 80.8\nMetric: F1, # Value: 88.2, # Original (Table 2): 88.5\n\n\nNote that the above results didn't involve any hyperparameter search.\n\n\nExample Usage\n-------------\n\n\n\n> \n> Created by Qingqing Cao | GitHub | Twitter\n> \n> \n> \n\n\n\n> \n> Made with ️ in New York.\n> \n> \n>" ]
question-answering
transformers
## MobileBERT fine-tuned on SQuAD v1 [MobileBERT](https://arxiv.org/abs/2004.02984) is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. This model was fine-tuned from the HuggingFace checkpoint `google/mobilebert-uncased` on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer). ## Details | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD1.1 | train | 90.6K | | SQuAD1.1 | eval | 11.1k | ### Fine-tuning - Python: `3.7.5` - Machine specs: `CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz` `Memory: 32 GiB` `GPUs: 2 GeForce GTX 1070, each with 8GiB memory` `GPU driver: 418.87.01, CUDA: 10.1` - script: ```shell # after install https://github.com/huggingface/transformers cd examples/question-answering mkdir -p data wget -O data/train-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json wget -O data/dev-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json export SQUAD_DIR=`pwd`/data python run_squad.py \ --model_type mobilebert \ --model_name_or_path google/mobilebert-uncased \ --do_train \ --do_eval \ --do_lower_case \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --per_gpu_train_batch_size 16 \ --per_gpu_eval_batch_size 16 \ --learning_rate 4e-5 \ --num_train_epochs 5.0 \ --max_seq_length 320 \ --doc_stride 128 \ --warmup_steps 1400 \ --output_dir $SQUAD_DIR/mobilebert-uncased-warmup-squad_v1 2>&1 | tee train-mobilebert-warmup-squad_v1.log ``` It took about 3 hours to finish. ### Results **Model size**: `95M` | Metric | # Value | # Original ([Table 5](https://arxiv.org/pdf/2004.02984.pdf))| | ------ | --------- | --------- | | **EM** | **82.6** | **82.9** | | **F1** | **90.0** | **90.0** | Note that the above results didn't involve any hyperparameter search. ## Example Usage ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="csarron/mobilebert-uncased-squad-v1", tokenizer="csarron/mobilebert-uncased-squad-v1" ) predictions = qa_pipeline({ 'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.", 'question': "What day was the game played on?" }) print(predictions) # output: # {'score': 0.7754058241844177, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'} ``` > Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp) > Made with ❤️ in New York.
{"language": "en", "license": "mit", "tags": ["question-answering", "mobilebert"], "datasets": ["squad"], "metrics": ["squad"], "widget": [{"text": "Which name is also used to describe the Amazon rainforest in English?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}, {"text": "How many square kilometers of rainforest is covered in the basin?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}]}
csarron/mobilebert-uncased-squad-v1
null
[ "transformers", "pytorch", "safetensors", "mobilebert", "question-answering", "en", "dataset:squad", "arxiv:2004.02984", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.02984" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #mobilebert #question-answering #en #dataset-squad #arxiv-2004.02984 #license-mit #endpoints_compatible #region-us
MobileBERT fine-tuned on SQuAD v1 --------------------------------- MobileBERT is a thin version of BERT\_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. This model was fine-tuned from the HuggingFace checkpoint 'google/mobilebert-uncased' on SQuAD1.1. Details ------- Dataset: SQuAD1.1, Split: train, # samples: 90.6K Dataset: SQuAD1.1, Split: eval, # samples: 11.1k ### Fine-tuning * Python: '3.7.5' * Machine specs: 'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz' 'Memory: 32 GiB' 'GPUs: 2 GeForce GTX 1070, each with 8GiB memory' 'GPU driver: 418.87.01, CUDA: 10.1' * script: It took about 3 hours to finish. ### Results Model size: '95M' Metric: EM, # Value: 82.6, # Original (Table 5): 82.9 Metric: F1, # Value: 90.0, # Original (Table 5): 90.0 Note that the above results didn't involve any hyperparameter search. Example Usage ------------- > > Created by Qingqing Cao | GitHub | Twitter > > > > > Made with ️ in New York. > > >
[ "# samples: 90.6K\nDataset: SQuAD1.1, Split: eval, # samples: 11.1k", "### Fine-tuning\n\n\n* Python: '3.7.5'\n* Machine specs:\n\n\n'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz'\n\n\n'Memory: 32 GiB'\n\n\n'GPUs: 2 GeForce GTX 1070, each with 8GiB memory'\n\n\n'GPU driver: 418.87.01, CUDA: 10.1'\n* script:\n\n\nIt took about 3 hours to finish.", "### Results\n\n\nModel size: '95M'\n\n\nMetric: EM, # Value: 82.6, # Original (Table 5): 82.9\nMetric: F1, # Value: 90.0, # Original (Table 5): 90.0\n\n\nNote that the above results didn't involve any hyperparameter search.\n\n\nExample Usage\n-------------\n\n\n\n> \n> Created by Qingqing Cao | GitHub | Twitter\n> \n> \n> \n\n\n\n> \n> Made with ️ in New York.\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #safetensors #mobilebert #question-answering #en #dataset-squad #arxiv-2004.02984 #license-mit #endpoints_compatible #region-us \n", "# samples: 90.6K\nDataset: SQuAD1.1, Split: eval, # samples: 11.1k", "### Fine-tuning\n\n\n* Python: '3.7.5'\n* Machine specs:\n\n\n'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz'\n\n\n'Memory: 32 GiB'\n\n\n'GPUs: 2 GeForce GTX 1070, each with 8GiB memory'\n\n\n'GPU driver: 418.87.01, CUDA: 10.1'\n* script:\n\n\nIt took about 3 hours to finish.", "### Results\n\n\nModel size: '95M'\n\n\nMetric: EM, # Value: 82.6, # Original (Table 5): 82.9\nMetric: F1, # Value: 90.0, # Original (Table 5): 90.0\n\n\nNote that the above results didn't involve any hyperparameter search.\n\n\nExample Usage\n-------------\n\n\n\n> \n> Created by Qingqing Cao | GitHub | Twitter\n> \n> \n> \n\n\n\n> \n> Made with ️ in New York.\n> \n> \n>" ]
question-answering
transformers
## MobileBERT fine-tuned on SQuAD v2 [MobileBERT](https://arxiv.org/abs/2004.02984) is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. This model was fine-tuned from the HuggingFace checkpoint `google/mobilebert-uncased` on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer). ## Details | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ### Fine-tuning - Python: `3.7.5` - Machine specs: `CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz` `Memory: 32 GiB` `GPUs: 2 GeForce GTX 1070, each with 8GiB memory` `GPU driver: 418.87.01, CUDA: 10.1` - script: ```shell # after install https://github.com/huggingface/transformers cd examples/question-answering mkdir -p data wget -O data/train-v2.0.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json wget -O data/dev-v2.0.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json export SQUAD_DIR=`pwd`/data python run_squad.py \ --model_type mobilebert \ --model_name_or_path google/mobilebert-uncased \ --do_train \ --do_eval \ --do_lower_case \ --version_2_with_negative \ --train_file $SQUAD_DIR/train-v2.0.json \ --predict_file $SQUAD_DIR/dev-v2.0.json \ --per_gpu_train_batch_size 16 \ --per_gpu_eval_batch_size 16 \ --learning_rate 4e-5 \ --num_train_epochs 5.0 \ --max_seq_length 320 \ --doc_stride 128 \ --warmup_steps 1400 \ --save_steps 2000 \ --output_dir $SQUAD_DIR/mobilebert-uncased-warmup-squad_v2 2>&1 | tee train-mobilebert-warmup-squad_v2.log ``` It took about 3.5 hours to finish. ### Results **Model size**: `95M` | Metric | # Value | # Original ([Table 5](https://arxiv.org/pdf/2004.02984.pdf))| | ------ | --------- | --------- | | **EM** | **75.2** | **76.2** | | **F1** | **78.8** | **79.2** | Note that the above results didn't involve any hyperparameter search. ## Example Usage ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="csarron/mobilebert-uncased-squad-v2", tokenizer="csarron/mobilebert-uncased-squad-v2" ) predictions = qa_pipeline({ 'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.", 'question': "What day was the game played on?" }) print(predictions) # output: # {'score': 0.71434086561203, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'} ``` > Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp) > Made with ❤️ in New York.
{"language": "en", "license": "mit", "tags": ["question-answering", "mobilebert"], "datasets": ["squad_v2"], "metrics": ["squad_v2"], "widget": [{"text": "Which name is also used to describe the Amazon rainforest in English?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}, {"text": "How many square kilometers of rainforest is covered in the basin?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}]}
csarron/mobilebert-uncased-squad-v2
null
[ "transformers", "pytorch", "onnx", "safetensors", "mobilebert", "question-answering", "en", "dataset:squad_v2", "arxiv:2004.02984", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.02984" ]
[ "en" ]
TAGS #transformers #pytorch #onnx #safetensors #mobilebert #question-answering #en #dataset-squad_v2 #arxiv-2004.02984 #license-mit #endpoints_compatible #region-us
MobileBERT fine-tuned on SQuAD v2 --------------------------------- MobileBERT is a thin version of BERT\_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. This model was fine-tuned from the HuggingFace checkpoint 'google/mobilebert-uncased' on SQuAD2.0. Details ------- Dataset: SQuAD2.0, Split: train, # samples: 130k Dataset: SQuAD2.0, Split: eval, # samples: 12.3k ### Fine-tuning * Python: '3.7.5' * Machine specs: 'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz' 'Memory: 32 GiB' 'GPUs: 2 GeForce GTX 1070, each with 8GiB memory' 'GPU driver: 418.87.01, CUDA: 10.1' * script: It took about 3.5 hours to finish. ### Results Model size: '95M' Metric: EM, # Value: 75.2, # Original (Table 5): 76.2 Metric: F1, # Value: 78.8, # Original (Table 5): 79.2 Note that the above results didn't involve any hyperparameter search. Example Usage ------------- > > Created by Qingqing Cao | GitHub | Twitter > > > > > Made with ️ in New York. > > >
[ "# samples: 130k\nDataset: SQuAD2.0, Split: eval, # samples: 12.3k", "### Fine-tuning\n\n\n* Python: '3.7.5'\n* Machine specs:\n\n\n'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz'\n\n\n'Memory: 32 GiB'\n\n\n'GPUs: 2 GeForce GTX 1070, each with 8GiB memory'\n\n\n'GPU driver: 418.87.01, CUDA: 10.1'\n* script:\n\n\nIt took about 3.5 hours to finish.", "### Results\n\n\nModel size: '95M'\n\n\nMetric: EM, # Value: 75.2, # Original (Table 5): 76.2\nMetric: F1, # Value: 78.8, # Original (Table 5): 79.2\n\n\nNote that the above results didn't involve any hyperparameter search.\n\n\nExample Usage\n-------------\n\n\n\n> \n> Created by Qingqing Cao | GitHub | Twitter\n> \n> \n> \n\n\n\n> \n> Made with ️ in New York.\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #onnx #safetensors #mobilebert #question-answering #en #dataset-squad_v2 #arxiv-2004.02984 #license-mit #endpoints_compatible #region-us \n", "# samples: 130k\nDataset: SQuAD2.0, Split: eval, # samples: 12.3k", "### Fine-tuning\n\n\n* Python: '3.7.5'\n* Machine specs:\n\n\n'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz'\n\n\n'Memory: 32 GiB'\n\n\n'GPUs: 2 GeForce GTX 1070, each with 8GiB memory'\n\n\n'GPU driver: 418.87.01, CUDA: 10.1'\n* script:\n\n\nIt took about 3.5 hours to finish.", "### Results\n\n\nModel size: '95M'\n\n\nMetric: EM, # Value: 75.2, # Original (Table 5): 76.2\nMetric: F1, # Value: 78.8, # Original (Table 5): 79.2\n\n\nNote that the above results didn't involve any hyperparameter search.\n\n\nExample Usage\n-------------\n\n\n\n> \n> Created by Qingqing Cao | GitHub | Twitter\n> \n> \n> \n\n\n\n> \n> Made with ️ in New York.\n> \n> \n>" ]
question-answering
transformers
## RoBERTa-base fine-tuned on SQuAD v1 This model was fine-tuned from the HuggingFace [RoBERTa](https://arxiv.org/abs/1907.11692) base checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer). This model is case-sensitive: it makes a difference between english and English. ## Details | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD1.1 | train | 96.8K | | SQuAD1.1 | eval | 11.8k | ### Fine-tuning - Python: `3.7.5` - Machine specs: `CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz` `Memory: 32 GiB` `GPUs: 2 GeForce GTX 1070, each with 8GiB memory` `GPU driver: 418.87.01, CUDA: 10.1` - script: ```shell # after install https://github.com/huggingface/transformers cd examples/question-answering mkdir -p data wget -O data/train-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json wget -O data/dev-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json python run_energy_squad.py \ --model_type roberta \ --model_name_or_path roberta-base \ --do_train \ --do_eval \ --train_file train-v1.1.json \ --predict_file dev-v1.1.json \ --per_gpu_train_batch_size 12 \ --per_gpu_eval_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 320 \ --doc_stride 128 \ --data_dir data \ --output_dir data/roberta-base-squad-v1 2>&1 | tee train-roberta-base-squad-v1.log ``` It took about 2 hours to finish. ### Results **Model size**: `477M` | Metric | # Value | | ------ | --------- | | **EM** | **83.0** | | **F1** | **90.4** | Note that the above results didn't involve any hyperparameter search. ## Example Usage ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="csarron/roberta-base-squad-v1", tokenizer="csarron/roberta-base-squad-v1" ) predictions = qa_pipeline({ 'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.", 'question': "What day was the game played on?" }) print(predictions) # output: # {'score': 0.8625259399414062, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'} ``` > Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp) > Made with ❤️ in New York.
{"language": "en", "license": "mit", "tags": ["question-answering", "roberta", "roberta-base"], "datasets": ["squad"], "metrics": ["squad"], "widget": [{"text": "Which name is also used to describe the Amazon rainforest in English?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}, {"text": "How many square kilometers of rainforest is covered in the basin?", "context": "The Amazon rainforest (Portuguese: Floresta Amaz\u00f4nica or Amaz\u00f4nia; Spanish: Selva Amaz\u00f3nica, Amazon\u00eda or usually Amazonia; French: For\u00eat amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."}]}
csarron/roberta-base-squad-v1
null
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "question-answering", "roberta-base", "en", "dataset:squad", "arxiv:1907.11692", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1907.11692" ]
[ "en" ]
TAGS #transformers #pytorch #jax #safetensors #roberta #question-answering #roberta-base #en #dataset-squad #arxiv-1907.11692 #license-mit #endpoints_compatible #region-us
RoBERTa-base fine-tuned on SQuAD v1 ----------------------------------- This model was fine-tuned from the HuggingFace RoBERTa base checkpoint on SQuAD1.1. This model is case-sensitive: it makes a difference between english and English. Details ------- Dataset: SQuAD1.1, Split: train, # samples: 96.8K Dataset: SQuAD1.1, Split: eval, # samples: 11.8k ### Fine-tuning * Python: '3.7.5' * Machine specs: 'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz' 'Memory: 32 GiB' 'GPUs: 2 GeForce GTX 1070, each with 8GiB memory' 'GPU driver: 418.87.01, CUDA: 10.1' * script: It took about 2 hours to finish. ### Results Model size: '477M' Note that the above results didn't involve any hyperparameter search. Example Usage ------------- > > Created by Qingqing Cao | GitHub | Twitter > > > > > Made with ️ in New York. > > >
[ "# samples: 96.8K\nDataset: SQuAD1.1, Split: eval, # samples: 11.8k", "### Fine-tuning\n\n\n* Python: '3.7.5'\n* Machine specs:\n\n\n'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz'\n\n\n'Memory: 32 GiB'\n\n\n'GPUs: 2 GeForce GTX 1070, each with 8GiB memory'\n\n\n'GPU driver: 418.87.01, CUDA: 10.1'\n* script:\n\n\nIt took about 2 hours to finish.", "### Results\n\n\nModel size: '477M'\n\n\n\nNote that the above results didn't involve any hyperparameter search.\n\n\nExample Usage\n-------------\n\n\n\n> \n> Created by Qingqing Cao | GitHub | Twitter\n> \n> \n> \n\n\n\n> \n> Made with ️ in New York.\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #roberta #question-answering #roberta-base #en #dataset-squad #arxiv-1907.11692 #license-mit #endpoints_compatible #region-us \n", "# samples: 96.8K\nDataset: SQuAD1.1, Split: eval, # samples: 11.8k", "### Fine-tuning\n\n\n* Python: '3.7.5'\n* Machine specs:\n\n\n'CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz'\n\n\n'Memory: 32 GiB'\n\n\n'GPUs: 2 GeForce GTX 1070, each with 8GiB memory'\n\n\n'GPU driver: 418.87.01, CUDA: 10.1'\n* script:\n\n\nIt took about 2 hours to finish.", "### Results\n\n\nModel size: '477M'\n\n\n\nNote that the above results didn't involve any hyperparameter search.\n\n\nExample Usage\n-------------\n\n\n\n> \n> Created by Qingqing Cao | GitHub | Twitter\n> \n> \n> \n\n\n\n> \n> Made with ️ in New York.\n> \n> \n>" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2175 - Accuracy: 0.923 - F1: 0.9233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8352 | 1.0 | 250 | 0.3079 | 0.91 | 0.9086 | | 0.247 | 2.0 | 500 | 0.2175 | 0.923 | 0.9233 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
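The card above reports evaluation metrics and training hyperparameters but leaves the usage sections empty. A minimal inference sketch, assuming only the checkpoint id listed in this record and an arbitrary example sentence, could look like the following (the emitted label names depend on how the label mapping was saved with the checkpoint):

```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned emotion classifier described above.
# The model id is taken from this record; the input sentence is an arbitrary example.
classifier = pipeline(
    "text-classification",
    model="cscottp27/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you again!"))
# Expected output shape: [{'label': '<emotion label>', 'score': <float>}]
```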
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.923, "name": "Accuracy"}, {"type": "f1", "value": 0.9232542847906783, "name": "F1"}]}]}]}
cscottp27/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-emotion ========================================= This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: * Loss: 0.2175 * Accuracy: 0.923 * F1: 0.9233 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
null
transformers
# BanglaBERT This repository contains the pretrained discriminator checkpoint of the model **BanglaBERT**. This is an [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) discriminator model pretrained with the Replaced Token Detection (RTD) objective. Finetuned models using this checkpoint achieve state-of-the-art results on many of the NLP tasks in bengali. For finetuning on different downstream tasks such as `Sentiment classification`, `Named Entity Recognition`, `Natural Language Inference` etc., refer to the scripts in the official GitHub [repository](https://github.com/csebuetnlp/banglabert). **Note**: This model was pretrained using a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer). All finetuning scripts in the official GitHub repository uses this normalization by default. If you need to adapt the pretrained model for a different task make sure the text units are normalized using this pipeline before tokenizing to get best results. A basic example is given below: ## Using this model as a discriminator in `transformers` (tested on 4.11.0.dev0) ```python from transformers import AutoModelForPreTraining, AutoTokenizer from normalizer import normalize # pip install git+https://github.com/csebuetnlp/normalizer import torch model = AutoModelForPreTraining.from_pretrained("csebuetnlp/banglabert") tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglabert") original_sentence = "আমি কৃতজ্ঞ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।" fake_sentence = "আমি হতাশ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।" fake_sentence = normalize(fake_sentence) # this normalization step is required before tokenizing the text fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = model(fake_inputs).logits predictions = torch.round((torch.sign(discriminator_outputs) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] print("\n" + "-" * 50) [print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()[1:-1]] print("\n" + "-" * 50) ``` ## Benchmarks * Zero-shot cross-lingual transfer-learning | Model | Params | SC (macro-F1) | NLI (accuracy) | NER (micro-F1) | QA (EM/F1) | BangLUE score | |----------------|-----------|-----------|-----------|-----------|-----------|-----------| |[mBERT](https://huggingface.co/bert-base-multilingual-cased) | 180M | 27.05 | 62.22 | 39.27 | 59.01/64.18 | 50.35 | |[XLM-R (base)](https://huggingface.co/xlm-roberta-base) | 270M | 42.03 | 72.18 | 45.37 | 55.03/61.83 | 55.29 | |[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M | 49.49 | 78.13 | 56.48 | 71.13/77.70 | 66.59 | |[BanglishBERT](https://huggingface.co/csebuetnlp/banglishbert) | 110M | 48.39 | 75.26 | 55.56 | 72.87/78.63 | 66.14 | * Supervised fine-tuning | Model | Params | SC (macro-F1) | NLI (accuracy) | NER (micro-F1) | QA (EM/F1) | BangLUE score | |----------------|-----------|-----------|-----------|-----------|-----------|-----------| |[mBERT](https://huggingface.co/bert-base-multilingual-cased) | 180M | 67.59 | 75.13 | 68.97 | 67.12/72.64 | 70.29 | |[XLM-R (base)](https://huggingface.co/xlm-roberta-base) | 270M | 69.54 | 78.46 | 73.32 | 68.09/74.27 | 72.82 | |[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M | 70.97 | 82.40 | 78.39 | 73.15/79.06 | 76.79 | |[sahajBERT](https://huggingface.co/neuropark/sahajBERT) | 18M | 71.12 | 76.92 | 70.94 | 65.48/70.69 | 71.03 | |[BanglishBERT](https://huggingface.co/csebuetnlp/banglishbert) | 
110M | 70.61 | 80.95 | 76.28 | 72.43/78.40 | 75.73 | |[BanglaBERT](https://huggingface.co/csebuetnlp/banglabert) | 110M | 72.89 | 82.80 | 77.78 | 72.63/79.34 | **77.09** | The benchmarking datasets are as follows: * **SC:** **[Sentiment Classification](https://aclanthology.org/2021.findings-emnlp.278)** * **NER:** **[Named Entity Recognition](https://multiconer.github.io/competition)** * **NLI:** **[Natural Language Inference](https://github.com/csebuetnlp/banglabert/#datasets)** * **QA:** **[Question Answering](https://github.com/csebuetnlp/banglabert/#datasets)** ## Citation If you use this model, please cite the following paper: ``` @inproceedings{bhattacharjee-etal-2022-banglabert, title = "{B}angla{BERT}: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in {B}angla", author = "Bhattacharjee, Abhik and Hasan, Tahmid and Ahmad, Wasi and Mubasshir, Kazi Samin and Islam, Md Saiful and Iqbal, Anindya and Rahman, M. Sohel and Shahriyar, Rifat", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-naacl.98", pages = "1318--1327", abstract = "In this work, we introduce BanglaBERT, a BERT-based Natural Language Understanding (NLU) model pretrained in Bangla, a widely spoken yet low-resource language in the NLP literature. To pretrain BanglaBERT, we collect 27.5 GB of Bangla pretraining data (dubbed {`}Bangla2B+{'}) by crawling 110 popular Bangla sites. We introduce two downstream task datasets on natural language inference and question answering and benchmark on four diverse NLU tasks covering text classification, sequence labeling, and span prediction. In the process, we bring them under the first-ever Bangla Language Understanding Benchmark (BLUB). BanglaBERT achieves state-of-the-art results outperforming multilingual and monolingual models. We are making the models, datasets, and a leaderboard publicly available at \url{https://github.com/csebuetnlp/banglabert} to advance Bangla NLP.", } ``` If you use the normalization module, please cite the following paper: ``` @inproceedings{hasan-etal-2020-low, title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation", author = "Hasan, Tahmid and Bhattacharjee, Abhik and Samin, Kazi and Hasan, Masum and Basak, Madhusudan and Rahman, M. Sohel and Shahriyar, Rifat", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.207", doi = "10.18653/v1/2020.emnlp-main.207", pages = "2612--2623", abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. 
With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.", } ```
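The snippet in the card shows the checkpoint used as a raw RTD discriminator; the actual task-specific fine-tuning scripts live in the official repository linked above. As a rough sketch only (not the official recipe), the same checkpoint could be adapted for sequence classification by attaching a fresh classification head while keeping the normalization step the card requires; `num_labels=2` below is an arbitrary placeholder for a binary task:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from normalizer import normalize  # pip install git+https://github.com/csebuetnlp/normalizer

model_name = "csebuetnlp/banglabert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Loads the pretrained discriminator body with a newly initialized classification head.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Normalization step required by the card before tokenizing.
text = normalize("আমি কৃতজ্ঞ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।")
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits  # head is untrained: fine-tune before trusting predictions
```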
{"language": ["bn"], "licenses": ["cc-by-nc-sa-4.0"]}
csebuetnlp/banglabert
null
[ "transformers", "pytorch", "electra", "pretraining", "bn", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "bn" ]
TAGS #transformers #pytorch #electra #pretraining #bn #endpoints_compatible #has_space #region-us
BanglaBERT ========== This repository contains the pretrained discriminator checkpoint of the model BanglaBERT. This is an ELECTRA discriminator model pretrained with the Replaced Token Detection (RTD) objective. Finetuned models using this checkpoint achieve state-of-the-art results on many of the NLP tasks in bengali. For finetuning on different downstream tasks such as 'Sentiment classification', 'Named Entity Recognition', 'Natural Language Inference' etc., refer to the scripts in the official GitHub repository. Note: This model was pretrained using a specific normalization pipeline available here. All finetuning scripts in the official GitHub repository uses this normalization by default. If you need to adapt the pretrained model for a different task make sure the text units are normalized using this pipeline before tokenizing to get best results. A basic example is given below: Using this model as a discriminator in 'transformers' (tested on 4.11.0.dev0) ----------------------------------------------------------------------------- Benchmarks ---------- * Zero-shot cross-lingual transfer-learning * Supervised fine-tuning The benchmarking datasets are as follows: * SC: Sentiment Classification * NER: Named Entity Recognition * NLI: Natural Language Inference * QA: Question Answering If you use this model, please cite the following paper: If you use the normalization module, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #electra #pretraining #bn #endpoints_compatible #has_space #region-us \n" ]
summarization
transformers
# mT5-m2o-english-CrossSum This repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset, where the target summary was in **english**, i.e. this model tries to **summarize text written in any language in English.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum). ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python import re from transformers import AutoTokenizer, AutoModelForSeq2SeqLM WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization.""" model_name = "csebuetnlp/mT5_m2o_english_crossSum" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) input_ids = tokenizer( [WHITESPACE_HANDLER(article_text)], return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ``` ## Citation If you use this model, please cite the following paper: ``` @article{hasan2021crosssum, author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar}, title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs}, journal = {CoRR}, volume = {abs/2112.08804}, year = {2021}, url = {https://arxiv.org/abs/2112.08804}, eprinttype = {arXiv}, eprint = {2112.08804} } ```
{"language": ["am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo"], "tags": ["summarization", "mT5"], "licenses": ["cc-by-nc-sa-4.0"], "widget": [{"text": "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization."}]}
csebuetnlp/mT5_m2o_english_crossSum
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "mT5", "am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo", "arxiv:2112.08804", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2112.08804" ]
[ "am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo" ]
TAGS #transformers #pytorch #mt5 #text2text-generation #summarization #mT5 #am #ar #az #bn #my #zh #en #fr #gu #ha #hi #ig #id #ja #rn #ko #ky #mr #ne #om #ps #fa #pcm #pt #pa #ru #gd #sr #si #so #es #sw #ta #te #th #ti #tr #uk #ur #uz #vi #cy #yo #arxiv-2112.08804 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# mT5-m2o-english-CrossSum This repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the CrossSum dataset, where the target summary was in english, i.e. this model tries to summarize text written in any language in English. For finetuning details and scripts, see the paper and the official repository. ## Using this model in 'transformers' (tested on 4.11.0.dev0) If you use this model, please cite the following paper:
[ "# mT5-m2o-english-CrossSum\n\nThis repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the CrossSum dataset, where the target summary was in english, i.e. this model tries to summarize text written in any language in English. For finetuning details and scripts, see the paper and the official repository.", "## Using this model in 'transformers' (tested on 4.11.0.dev0)\n\n\n\n\n\n\nIf you use this model, please cite the following paper:" ]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #mT5 #am #ar #az #bn #my #zh #en #fr #gu #ha #hi #ig #id #ja #rn #ko #ky #mr #ne #om #ps #fa #pcm #pt #pa #ru #gd #sr #si #so #es #sw #ta #te #th #ti #tr #uk #ur #uz #vi #cy #yo #arxiv-2112.08804 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# mT5-m2o-english-CrossSum\n\nThis repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the CrossSum dataset, where the target summary was in english, i.e. this model tries to summarize text written in any language in English. For finetuning details and scripts, see the paper and the official repository.", "## Using this model in 'transformers' (tested on 4.11.0.dev0)\n\n\n\n\n\n\nIf you use this model, please cite the following paper:" ]
summarization
transformers
# mT5-multilingual-XLSum This repository contains the mT5 checkpoint finetuned on the 45 languages of [XL-Sum](https://huggingface.co/datasets/csebuetnlp/xlsum) dataset. For finetuning details and scripts, see the [paper](https://aclanthology.org/2021.findings-acl.413/) and the [official repository](https://github.com/csebuetnlp/xl-sum). ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python import re from transformers import AutoTokenizer, AutoModelForSeq2SeqLM WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization.""" model_name = "csebuetnlp/mT5_multilingual_XLSum" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) input_ids = tokenizer( [WHITESPACE_HANDLER(article_text)], return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ``` ## Benchmarks Scores on the XL-Sum test sets are as follows: Language | ROUGE-1 / ROUGE-2 / ROUGE-L ---------|---------------------------- Amharic | 20.0485 / 7.4111 / 18.0753 Arabic | 34.9107 / 14.7937 / 29.1623 Azerbaijani | 21.4227 / 9.5214 / 19.3331 Bengali | 29.5653 / 12.1095 / 25.1315 Burmese | 15.9626 / 5.1477 / 14.1819 Chinese (Simplified) | 39.4071 / 17.7913 / 33.406 Chinese (Traditional) | 37.1866 / 17.1432 / 31.6184 English | 37.601 / 15.1536 / 29.8817 French | 35.3398 / 16.1739 / 28.2041 Gujarati | 21.9619 / 7.7417 / 19.86 Hausa | 39.4375 / 17.6786 / 31.6667 Hindi | 38.5882 / 16.8802 / 32.0132 Igbo | 31.6148 / 10.1605 / 24.5309 Indonesian | 37.0049 / 17.0181 / 30.7561 Japanese | 48.1544 / 23.8482 / 37.3636 Kirundi | 31.9907 / 14.3685 / 25.8305 Korean | 23.6745 / 11.4478 / 22.3619 Kyrgyz | 18.3751 / 7.9608 / 16.5033 Marathi | 22.0141 / 9.5439 / 19.9208 Nepali | 26.6547 / 10.2479 / 24.2847 Oromo | 18.7025 / 6.1694 / 16.1862 Pashto | 38.4743 / 15.5475 / 31.9065 Persian | 36.9425 / 16.1934 / 30.0701 Pidgin | 37.9574 / 15.1234 / 29.872 Portuguese | 37.1676 / 15.9022 / 28.5586 Punjabi | 30.6973 / 12.2058 / 25.515 Russian | 32.2164 / 13.6386 / 26.1689 Scottish Gaelic | 29.0231 / 
10.9893 / 22.8814 Serbian (Cyrillic) | 23.7841 / 7.9816 / 20.1379 Serbian (Latin) | 21.6443 / 6.6573 / 18.2336 Sinhala | 27.2901 / 13.3815 / 23.4699 Somali | 31.5563 / 11.5818 / 24.2232 Spanish | 31.5071 / 11.8767 / 24.0746 Swahili | 37.6673 / 17.8534 / 30.9146 Tamil | 24.3326 / 11.0553 / 22.0741 Telugu | 19.8571 / 7.0337 / 17.6101 Thai | 37.3951 / 17.275 / 28.8796 Tigrinya | 25.321 / 8.0157 / 21.1729 Turkish | 32.9304 / 15.5709 / 29.2622 Ukrainian | 23.9908 / 10.1431 / 20.9199 Urdu | 39.5579 / 18.3733 / 32.8442 Uzbek | 16.8281 / 6.3406 / 15.4055 Vietnamese | 32.8826 / 16.2247 / 26.0844 Welsh | 32.6599 / 11.596 / 26.1164 Yoruba | 31.6595 / 11.6599 / 25.0898 ## Citation If you use this model, please cite the following paper: ``` @inproceedings{hasan-etal-2021-xl, title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages", author = "Hasan, Tahmid and Bhattacharjee, Abhik and Islam, Md. Saiful and Mubasshir, Kazi and Li, Yuan-Fang and Kang, Yong-Bin and Rahman, M. Sohel and Shahriyar, Rifat", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.413", pages = "4693--4703", } ```
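The usage example in the card drives the tokenizer and `generate()` by hand. A shorter, equivalent route (a sketch, mirroring the generation settings from the card) is the `summarization` pipeline; note that very long inputs still need to be truncated to the 512-token limit used in the card:

```python
from transformers import pipeline

# Sketch of the same checkpoint behind the high-level summarization pipeline;
# generation settings mirror the card's generate() call.
summarizer = pipeline("summarization", model="csebuetnlp/mT5_multilingual_XLSum")

article_text = "..."  # placeholder: the whitespace-normalized article from the card
print(summarizer(
    article_text,
    max_length=84,
    num_beams=4,
    no_repeat_ngram_size=2,
)[0]["summary_text"])
```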
{"language": ["am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo"], "tags": ["summarization", "mT5"], "datasets": ["csebuetnlp/xlsum"], "licenses": ["cc-by-nc-sa-4.0"], "widget": [{"text": "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization."}], "model-index": [{"name": "csebuetnlp/mT5_multilingual_XLSum", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "test"}, "metrics": [{"type": "rouge", "value": 36.5002, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 13.934, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 28.9876, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 28.9958, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 2.0674800872802734, "name": "loss", "verified": true}, {"type": "gen_len", "value": 26.9733, "name": "gen_len", "verified": true}]}]}]}
csebuetnlp/mT5_multilingual_XLSum
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "mT5", "am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo", "dataset:csebuetnlp/xlsum", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo" ]
TAGS #transformers #pytorch #mt5 #text2text-generation #summarization #mT5 #am #ar #az #bn #my #zh #en #fr #gu #ha #hi #ig #id #ja #rn #ko #ky #mr #ne #om #ps #fa #pcm #pt #pa #ru #gd #sr #si #so #es #sw #ta #te #th #ti #tr #uk #ur #uz #vi #cy #yo #dataset-csebuetnlp/xlsum #model-index #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
mT5-multilingual-XLSum ====================== This repository contains the mT5 checkpoint finetuned on the 45 languages of XL-Sum dataset. For finetuning details and scripts, see the paper and the official repository. Using this model in 'transformers' (tested on 4.11.0.dev0) ---------------------------------------------------------- Benchmarks ---------- Scores on the XL-Sum test sets are as follows: If you use this model, please cite the following paper:
[]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #mT5 #am #ar #az #bn #my #zh #en #fr #gu #ha #hi #ig #id #ja #rn #ko #ky #mr #ne #om #ps #fa #pcm #pt #pa #ru #gd #sr #si #so #es #sw #ta #te #th #ti #tr #uk #ur #uz #vi #cy #yo #dataset-csebuetnlp/xlsum #model-index #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
fill-mask
transformers
# FrALBERT Base Cased Pretrained model on French language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, unlike other ALBERT models, is cased: it does make a difference between french and French. ## Model description FrALBERT is a transformers model pretrained on 16Go of French Wikipedia in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): FrALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the FrALBERT model as inputs. FrALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the second version of the base model. This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 768 hidden dimension - 12 attention heads - 11M parameters ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=fralbert-base-cased) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='cservan/fralbert-base-cased') >>> unmasker("Paris est la capitale de la [MASK] .") [ { "sequence": "paris est la capitale de la france.", "score": 0.6231236457824707, "token": 3043, "token_str": "france" }, { "sequence": "paris est la capitale de la region.", "score": 0.2993471622467041, "token": 10531, "token_str": "region" }, { "sequence": "paris est la capitale de la societe.", "score": 0.02028230018913746, "token": 24622, "token_str": "societe" }, { "sequence": "paris est la capitale de la bretagne.", "score": 0.012089950032532215, "token": 24987, "token_str": "bretagne" }, { "sequence": "paris est la capitale de la chine.", "score": 0.010002839379012585, "token": 14860, "token_str": "chine" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('cservan/fralbert-base-cased') model = AlbertModel.from_pretrained("cservan/fralbert-base-cased") text = "Remplacez-moi par le texte en français que vous souhaitez." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('cservan/fralbert-base-cased') model = TFAlbertModel.from_pretrained("cservan/fralbert-base-cased") text = "Remplacez-moi par le texte en français que vous souhaitez." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data The FrALBERT model was pretrained on 4go of [French Wikipedia](https://fr.wikipedia.org/wiki/French_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` ### Training The FrALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ## Evaluation results When fine-tuned on downstream tasks, the ALBERT models achieve the following results: Slot-filling: | | FrALBERT-base | FrALBERT-base-cased |----------------|---------------|-------------------- | MEDIA | 81.76 (0.59) | 85.09 (0.14) | ### BibTeX entry and citation info ```bibtex @inproceedings{cattan2021fralbert, author = {Oralie Cattan and Christophe Servan and Sophie Rosset}, booktitle = {Recent Advances in Natural Language Processing, RANLP 2021}, title = {{On the Usability of Transformers-based models for a French Question-Answering task}}, year = {2021}, address = {Online}, month = sep, } ``` Link to the paper: [PDF](https://hal.archives-ouvertes.fr/hal-03336060)
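The masking procedure in the Training section is described only in prose. The sketch below illustrates that 15% / 80% / 10% / 10% rule on whitespace-split tokens; it is an illustration of the rule, not the actual pretraining code, which operates on SentencePiece ids:

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", vocab=None, mlm_prob=0.15, seed=0):
    """Illustration of the rule above: 15% of tokens are selected; of those,
    80% become [MASK], 10% become a random token, 10% are left unchanged."""
    rng = random.Random(seed)
    vocab = vocab or list(tokens)  # stand-in vocabulary, for illustration only
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mlm_prob:
            labels.append(tok)  # the model has to predict the original token
            r = rng.random()
            if r < 0.8:
                masked.append(mask_token)
            elif r < 0.9:
                masked.append(rng.choice(vocab))
            else:
                masked.append(tok)
        else:
            labels.append(None)  # ignored by the MLM loss
            masked.append(tok)
    return masked, labels

print(mask_tokens("Paris est la capitale de la France .".split()))
```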
{"language": "fr", "license": "apache-2.0", "datasets": ["wikipedia"]}
cservan/fralbert-base-cased
null
[ "transformers", "pytorch", "albert", "fill-mask", "fr", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1909.11942" ]
[ "fr" ]
TAGS #transformers #pytorch #albert #fill-mask #fr #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
FrALBERT Base Cased =================== Pretrained model on French language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, unlike other ALBERT models, is cased: it does make a difference between french and French. Model description ----------------- FrALBERT is a transformers model pretrained on 16Go of French Wikipedia in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: * Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. * Sentence Ordering Prediction (SOP): FrALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the FrALBERT model as inputs. FrALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. This is the second version of the base model. This model has the following configuration: * 12 repeating layers * 128 embedding dimension * 768 hidden dimension * 12 attention heads * 11M parameters Intended uses & limitations --------------------------- You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Training data ------------- The FrALBERT model was pretrained on 4go of French Wikipedia (excluding lists, tables and headers). Training procedure ------------------ ### Preprocessing The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ### Training The FrALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: * 15% of the tokens are masked. * In 80% of the cases, the masked tokens are replaced by '[MASK]'. 
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. * In the 10% remaining cases, the masked tokens are left as is. Evaluation results ------------------ When fine-tuned on downstream tasks, the ALBERT models achieve the following results: Slot-filling: FrALBERT-base: MEDIA, FrALBERT-base-cased: 81.76 (0.59) FrALBERT-base: , FrALBERT-base-cased: ### BibTeX entry and citation info Link to the paper: PDF
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\nThe FrALBERT model was pretrained on 4go of French Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are\nthen of the form:", "### Training\n\n\nThe FrALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:\n\n\nSlot-filling:\n\n\nFrALBERT-base: MEDIA, FrALBERT-base-cased: 81.76 (0.59)\nFrALBERT-base: , FrALBERT-base-cased:", "### BibTeX entry and citation info\n\n\nLink to the paper: PDF" ]
[ "TAGS\n#transformers #pytorch #albert #fill-mask #fr #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:\n\n\nTraining data\n-------------\n\n\nThe FrALBERT model was pretrained on 4go of French Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are\nthen of the form:", "### Training\n\n\nThe FrALBERT procedure follows the BERT setup.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, the ALBERT models achieve the following results:\n\n\nSlot-filling:\n\n\nFrALBERT-base: MEDIA, FrALBERT-base-cased: 81.76 (0.59)\nFrALBERT-base: , FrALBERT-base-cased:", "### BibTeX entry and citation info\n\n\nLink to the paper: PDF" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-1b-bemba-fds This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the [BembaSpeech](https://github.com/csikasote/BembaSpeech) dataset. It achieves the following results on the evaluation set: - Loss: 0.2898 - Wer: 0.3435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.7986 | 0.34 | 500 | 0.4549 | 0.7292 | | 0.5358 | 0.67 | 1000 | 0.3325 | 0.4491 | | 0.4559 | 1.01 | 1500 | 0.3090 | 0.3954 | | 0.3983 | 1.35 | 2000 | 0.3067 | 0.4105 | | 0.4067 | 1.68 | 2500 | 0.2838 | 0.3678 | | 0.3722 | 2.02 | 3000 | 0.2824 | 0.3762 | | 0.3286 | 2.36 | 3500 | 0.2810 | 0.3670 | | 0.3239 | 2.69 | 4000 | 0.2643 | 0.3501 | | 0.3187 | 3.03 | 4500 | 0.2838 | 0.3754 | | 0.2801 | 3.36 | 5000 | 0.2815 | 0.3507 | | 0.2806 | 3.7 | 5500 | 0.2725 | 0.3486 | | 0.2714 | 4.04 | 6000 | 0.2898 | 0.3435 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
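The usage sections above are left as "More information needed". A minimal inference sketch is given below; it uses the checkpoint id of this repo (`csikasote/wav2vec2-large-xls-r-1b-bemba-fds`) with the generic transformers speech-recognition pipeline, not any script the authors actually used, and `bemba_sample.wav` is a placeholder path assumed to be 16 kHz mono audio.

```python
from transformers import pipeline

# Checkpoint id taken from this card; "bemba_sample.wav" is a placeholder path
# assumed to contain 16 kHz mono Bemba speech.
asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/wav2vec2-large-xls-r-1b-bemba-fds",
)
print(asr("bemba_sample.wav")["text"])
```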
{"license": "apache-2.0", "tags": ["generated_from_trainer", "bem", "robust-speech-event"], "model-index": [{"name": "wav2vec2-large-xls-r-1b-bemba-fds", "results": []}]}
csikasote/wav2vec2-large-xls-r-1b-bemba-fds
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "bem", "robust-speech-event", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #bem #robust-speech-event #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xls-r-1b-bemba-fds ================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the BembaSpeech dataset. It achieves the following results on the evaluation set: * Loss: 0.2898 * Wer: 0.3435 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 15 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #bem #robust-speech-event #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-bemba-fds This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [BembaSpeech](https://github.com/csikasote/BembaSpeech) dataset. It achieves the following results on the evaluation set: - Loss: 0.3594 - Wer: 0.3838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9961 | 0.67 | 500 | 0.5157 | 0.7133 | | 0.5903 | 1.34 | 1000 | 0.3663 | 0.4989 | | 0.4804 | 2.02 | 1500 | 0.3547 | 0.4653 | | 0.4146 | 2.69 | 2000 | 0.3274 | 0.4345 | | 0.3792 | 3.36 | 2500 | 0.3586 | 0.4640 | | 0.3509 | 4.03 | 3000 | 0.3360 | 0.4316 | | 0.3114 | 4.7 | 3500 | 0.3382 | 0.4303 | | 0.2935 | 5.38 | 4000 | 0.3263 | 0.4091 | | 0.2723 | 6.05 | 4500 | 0.3348 | 0.4175 | | 0.2502 | 6.72 | 5000 | 0.3317 | 0.4147 | | 0.2334 | 7.39 | 5500 | 0.3542 | 0.4030 | | 0.2287 | 8.06 | 6000 | 0.3594 | 0.4067 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
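The hyperparameter list above maps directly onto `transformers.TrainingArguments`. The sketch below shows that mapping only; it is not the authors' training script (dataset preparation, model construction, and the `Trainer` wiring are omitted, and `output_dir` is a placeholder).

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed in this card; output_dir is a placeholder.
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-bemba-fds",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 8 x 2 = total train batch size 16 on one device
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                      # "Native AMP" mixed precision
)
```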
{"license": "apache-2.0", "tags": ["generated_from_trainer", "bem", "robust-speech-event"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-bemba-fds", "results": []}]}
csikasote/wav2vec2-large-xls-r-300m-bemba-fds
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "bem", "robust-speech-event", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #bem #robust-speech-event #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-bemba-fds =================================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the BembaSpeech dataset. It achieves the following results on the evaluation set: * Loss: 0.3594 * Wer: 0.3838 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #bem #robust-speech-event #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Bemba Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bemba language of Zambia using the [BembaSpeech](https://csikasote.github.io/BembaSpeech). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("csv", data_files={"test": "/content/test.csv"}, delimiter="\t")["test"] # Adapt the path to test.csv processor = Wav2Vec2Processor.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba") model = Wav2Vec2ForCTC.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba") #BembaSpeech is sample at 16kHz so we you do not need to resample #resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = speech_array.squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Bemba test data of BembaSpeech. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("csv", data_files={"test": "/content/test.csv"}, delimiter="\\t")["test"] wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba") model = Wav2Vec2ForCTC.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba") model.to("cuda") chars_to_ignore_regex = '[\,\_\?\.\!\;\:\"\“]' #resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = speech_array.squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 42.17 % ## Training The BembaSpeech `train`, `dev` and `test` datasets were used for training, development and evaluation respectively. 
The script used for evaluating the model on the test dataset can be found [here](https://colab.research.google.com/drive/1aplFHfaXE68HGDwBYV2KqUWPasrk7bXv?usp=sharing).
{"language": "bem", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["BembaSpeech"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Bemba by Claytone Sikasote", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "BembaSpeech bem", "type": "bembaspeech", "args": "bem"}, "metrics": [{"type": "wer", "value": 42.17, "name": "Test WER"}]}]}]}
csikasote/wav2vec2-large-xlsr-bemba
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "bem", "dataset:BembaSpeech", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "bem" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #bem #dataset-BembaSpeech #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Bemba Fine-tuned facebook/wav2vec2-large-xlsr-53 on Bemba language of Zambia using the BembaSpeech. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Bemba test data of BembaSpeech. Test Result: 42.17 % ## Training The BembaSpeech 'train', 'dev' and 'test' datasets were used for training, development and evaluation respectively. The script used for evaluating the model on the test dataset can be found here.
[ "# Wav2Vec2-Large-XLSR-53-Bemba\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Bemba language of Zambia using the BembaSpeech. When using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Bemba test data of BembaSpeech. \n\n\n\n\nTest Result: 42.17 %", "## Training\n\nThe BembaSpeech 'train', 'dev' and 'test' datasets were used for training, development and evaluation respectively. The script used for evaluating the model on the test dataset can be found here." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #bem #dataset-BembaSpeech #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Bemba\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Bemba language of Zambia using the BembaSpeech. When using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Bemba test data of BembaSpeech. \n\n\n\n\nTest Result: 42.17 %", "## Training\n\nThe BembaSpeech 'train', 'dev' and 'test' datasets were used for training, development and evaluation respectively. The script used for evaluating the model on the test dataset can be found here." ]
translation
transformers
### marianmt-th-zh_cn * source languages: th * target languages: zh_cn * dataset: * model: transformer-align * pre-processing: normalization + SentencePiece * test set translations: * test set scores: ## Training Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-th-zh_cn](https://wandb.ai/cstorm125/marianmt-th-zh_cn). ``` export WANDB_PROJECT=marianmt-th-zh_cn python train_model.py --input_fname ../data/v1/Train.csv \ --output_dir ../models/marianmt-th-zh_cn \ --source_lang th --target_lang zh \ --metric_tokenize zh --fp16 ``` ## Usage ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("cstorm125/marianmt-zh_cn-th") model = AutoModelForSeq2SeqLM.from_pretrained("cstorm125/marianmt-zh_cn-th").cpu() src_text = [ 'ฉันรักคุณ', 'ฉันอยากกินข้าว', ] translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) print([tokenizer.decode(t, skip_special_tokens=True) for t in translated]) > ['我爱你', '我想吃饭。'] ``` ## Requirements ``` transformers==4.6.0 torch==1.8.0 ```
{"tags": ["translation", "torch==1.8.0"], "widget": [{"text": "Inference Unavailable"}]}
cstorm125/marianmt-th-zh_cn
null
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "torch==1.8.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #marian #text2text-generation #translation #torch==1.8.0 #autotrain_compatible #endpoints_compatible #region-us
### marianmt-th-zh_cn * source languages: th * target languages: zh_cn * dataset: * model: transformer-align * pre-processing: normalization + SentencePiece * test set translations: * test set scores: ## Training Training scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-th-zh_cn. ## Usage ## Requirements
[ "### marianmt-th-zh_cn\n* source languages: th\n* target languages: zh_cn\n* dataset: \n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* test set translations: \n* test set scores:", "## Training\n\nTraining scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-th-zh_cn.", "## Usage", "## Requirements" ]
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #translation #torch==1.8.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### marianmt-th-zh_cn\n* source languages: th\n* target languages: zh_cn\n* dataset: \n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* test set translations: \n* test set scores:", "## Training\n\nTraining scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-th-zh_cn.", "## Usage", "## Requirements" ]
translation
transformers
### marianmt-zh_cn-th
* source languages: zh_cn
* target languages: th
* dataset: 
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set translations: 
* test set scores: 

## Training

Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-zh_cn-th](https://wandb.ai/cstorm125/marianmt-zh_cn-th).

```
export WANDB_PROJECT=marianmt-zh_cn-th
python train_model.py --input_fname ../data/v1/Train.csv \
    --output_dir ../models/marianmt-zh_cn-th \
    --source_lang zh --target_lang th \
    --metric_tokenize th_syllable --fp16
```

## Usage

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("cstorm125/marianmt-zh_cn-th")
model = AutoModelForSeq2SeqLM.from_pretrained("cstorm125/marianmt-zh_cn-th").cpu()

src_text = [
    '我爱你',
    '我想吃米饭',
]

translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])

> ['ผมรักคุณนะ', 'ฉันอยากกินข้าว']
```

## Requirements
```
transformers==4.6.0
torch==1.8.0
```
{"tags": ["translation", "torch==1.8.0"], "widget": [{"text": "Inference Unavailable"}]}
cstorm125/marianmt-zh_cn-th
null
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "torch==1.8.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #marian #text2text-generation #translation #torch==1.8.0 #autotrain_compatible #endpoints_compatible #region-us
### marianmt-zh_cn-th * source languages: zh_cn * target languages: th * dataset: * model: transformer-align * pre-processing: normalization + SentencePiece * test set translations: * test set scores: ## Training Training scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-zh_cn-th. ## Usage ## Requirements
[ "### marianmt-zh_cn-th\n* source languages: zh_cn\n* target languages: th\n* dataset: \n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* test set translations: \n* test set scores:", "## Training\n\nTraining scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-zh_cn-th.", "## Usage", "## Requirements" ]
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #translation #torch==1.8.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### marianmt-zh_cn-th\n* source languages: zh_cn\n* target languages: th\n* dataset: \n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* test set translations: \n* test set scores:", "## Training\n\nTraining scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-zh_cn-th.", "## Usage", "## Requirements" ]
question-answering
transformers
# wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa Finetuning `airesearch/wangchan-deberta_v1-base-wiki-20210520-news-spm` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`. Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py). Run with: ``` export MODEL_NAME=wangchan-deberta_v1-base-wiki-20210520-news-spm CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \ --model_name $MODEL_NAME \ --dataset_name chimera_qa \ --revision mlm@ckp-41100 \ --output_dir $MODEL_NAME-finetune-chimera_qa-model \ --log_dir $MODEL_NAME-finetune-chimera_qa-log \ --model_max_length 400 \ --pad_on_right \ --fp16 \ --use_auth_token ```
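The card gives the fine-tuning command but no inference snippet. A minimal hedged sketch with the extractive question-answering pipeline and the checkpoint id of this repo is shown below; the question and the trimmed context string are taken from the widget example in this row's metadata, not from the benchmark data.

```python
from transformers import pipeline

# Checkpoint id of this repo.
qa = pipeline(
    "question-answering",
    model="cstorm125/wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa",
)

# Question and (trimmed) context taken from the widget example attached to this card.
question = "สวนกุหลาบเป็นโรงเรียนอะไร"
context = "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) เป็นโรงเรียนชายล้วน"

result = qa(question=question, context=context)
print(result["answer"], result["score"])
```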
{"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
cstorm125/wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa
null
[ "transformers", "pytorch", "deberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #deberta #question-answering #endpoints_compatible #region-us
# wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa Finetuning 'airesearch/wangchan-deberta_v1-base-wiki-20210520-news-spm' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'. Trained with thai2transformers. Run with:
[ "# wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa\n\nFinetuning 'airesearch/wangchan-deberta_v1-base-wiki-20210520-news-spm' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:" ]
[ "TAGS\n#transformers #pytorch #deberta #question-answering #endpoints_compatible #region-us \n", "# wangchan-deberta_v1-base-wiki-20210520-news-spm-finetune-qa\n\nFinetuning 'airesearch/wangchan-deberta_v1-base-wiki-20210520-news-spm' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:" ]
question-answering
transformers
# airesearch/wangchanberta-base-att-spm-uncased Finetuning `airesearch/wangchanberta-base-att-spm-uncased` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`. Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py). Run with: ``` export MODEL_NAME=airesearch/wangchanberta-base-att-spm-uncased python train_question_answering_lm_finetuning.py \ --model_name $MODEL_NAME \ --dataset_name chimera_qa \ --output_dir $MODEL_NAME-finetune-chimera_qa-model \ --log_dir $MODEL_NAME-finetune-chimera_qa-log \ --lowercase \ --pad_on_right \ --fp16 ```
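Several of these QA fine-tunes describe removing training examples whose contexts have cosine similarity above 0.8 with validation and test examples. The exact procedure is not included in the card; the sketch below is only a toy illustration of that kind of near-duplicate filtering, using TF-IDF vectors and a 0.8 threshold as assumptions rather than the authors' actual preprocessing. For Thai text a word tokenizer (e.g. `newmm`) would be needed before vectorization.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_near_duplicates(train_contexts, heldout_contexts, threshold=0.8):
    """Drop training contexts too similar to any validation/test context."""
    # Assumes whitespace-separable text; Thai would need word segmentation first.
    vectorizer = TfidfVectorizer().fit(train_contexts + heldout_contexts)
    train_vecs = vectorizer.transform(train_contexts)
    heldout_vecs = vectorizer.transform(heldout_contexts)
    sims = cosine_similarity(train_vecs, heldout_vecs)  # shape (n_train, n_heldout)
    keep = sims.max(axis=1) < threshold
    return [c for c, k in zip(train_contexts, keep) if k]

# Toy usage with placeholder strings.
train = ["context about school a", "context about river b", "context about city c"]
heldout = ["context about school a"]
print(filter_near_duplicates(train, heldout))
```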
{"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
cstorm125/wangchanberta-base-att-spm-uncased-finetune-qa
null
[ "transformers", "pytorch", "camembert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #camembert #question-answering #endpoints_compatible #region-us
# airesearch/wangchanberta-base-att-spm-uncased Finetuning 'airesearch/wangchanberta-base-att-spm-uncased' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'. Trained with thai2transformers. Run with:
[ "# airesearch/wangchanberta-base-att-spm-uncased\n\nFinetuning 'airesearch/wangchanberta-base-att-spm-uncased' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:" ]
[ "TAGS\n#transformers #pytorch #camembert #question-answering #endpoints_compatible #region-us \n", "# airesearch/wangchanberta-base-att-spm-uncased\n\nFinetuning 'airesearch/wangchanberta-base-att-spm-uncased' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:" ]
question-answering
transformers
# wangchanberta-base-wiki-20210520-news-spm-finetune-qa Finetuning `airesearchth/wangchanberta-base-wiki-20210520-news-spm` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`. Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py). Run with: ``` export MODEL_NAME=airesearchth/wangchanberta-base-wiki-20210520-news-spm CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \ --model_name $MODEL_NAME \ --dataset_name chimera_qa \ --output_dir $MODEL_NAME-finetune-chimera_qa-model \ --log_dir $MODEL_NAME-finetune-chimera_qa-log \ --model_max_length 400 \ --pad_on_right \ --fp16 ```
{"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
cstorm125/wangchanberta-base-wiki-20210520-news-spm-finetune-qa
null
[ "transformers", "pytorch", "camembert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #camembert #question-answering #endpoints_compatible #region-us
# wangchanberta-base-wiki-20210520-news-spm-finetune-qa Finetuning 'airesearchth/wangchanberta-base-wiki-20210520-news-spm' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'. Trained with thai2transformers. Run with:
[ "# wangchanberta-base-wiki-20210520-news-spm-finetune-qa\n\nFinetuning 'airesearchth/wangchanberta-base-wiki-20210520-news-spm' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:" ]
[ "TAGS\n#transformers #pytorch #camembert #question-answering #endpoints_compatible #region-us \n", "# wangchanberta-base-wiki-20210520-news-spm-finetune-qa\n\nFinetuning 'airesearchth/wangchanberta-base-wiki-20210520-news-spm' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:" ]
question-answering
transformers
# wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa Finetuning `airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`. Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py). Run with: ``` export MODEL_NAME=airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \ --model_name $MODEL_NAME \ --dataset_name chimera_qa \ --output_dir $MODEL_NAME-finetune-chimera_qa-model \ --log_dir $MODEL_NAME-finetune-chimera_qa-log \ --model_max_length 400 \ --pad_on_right \ --fp16 \ --use_auth_token ```
{"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
cstorm125/wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa
null
[ "transformers", "pytorch", "camembert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #camembert #question-answering #endpoints_compatible #region-us
# wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa Finetuning 'airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'. Trained with thai2transformers. Run with:
[ "# wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa\n\nFinetuning 'airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:" ]
[ "TAGS\n#transformers #pytorch #camembert #question-answering #endpoints_compatible #region-us \n", "# wangchanberta-base-wiki-20210520-news-spm_span-mask-finetune-qa\n\nFinetuning 'airesearch/wangchanberta-base-wiki-20210520-news-spm_span-mask' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:" ]
null
k2
# Introduction This repo contains pre-trained model using <https://github.com/k2-fsa/icefall/pull/219>. It is trained on [AIShell](https://www.openslr.org/33/) dataset using modified transducer from [optimized_transducer](https://github.com/csukuangfj/optimized_transducer). Also, it uses [aidatatang_200zh](http://www.openslr.org/62/) as extra training data. ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01 cd icefall-aishell-transducer-stateless-modified-2-2022-03-01 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `TODO`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout TODO ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/TODO/egs/aishell/ASR/transducer_stateless_modified-2/train.py#L232>. In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 512-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from [Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419). A Conv1d layer is placed right after the input embedding layer. ----- ## Description This repo provides pre-trained transducer Conformer model for the AIShell dataset using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: ```bash cd egs/aishell/ASR ./prepare.sh --stop-stage 6 ./prepare_aidatatang_200zh.sh export CUDA_VISIBLE_DEVICES="0,1,2" ./transducer_stateless_modified-2/train.py \ --world-size 3 \ --num-epochs 90 \ --start-epoch 0 \ --exp-dir transducer_stateless_modified-2/exp-2 \ --max-duration 250 \ --lr-factor 2.0 \ --context-size 2 \ --modified-transducer-prob 0.25 \ --datatang-prob 0.2 ``` The tensorboard training log can be found at <https://tensorboard.dev/experiment/oG72ZlWaSGua6fXkcGRRjA/> The commands for decoding are ```bash # greedy search for epoch in 89; do for avg in 38; do ./transducer_stateless_modified-2/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless_modified-2/exp-2 \ --max-duration 100 \ --context-size 2 \ --decoding-method greedy_search \ --max-sym-per-frame 1 done done # modified beam search for epoch in 89; do for avg in 38; do ./transducer_stateless_modified-2/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless_modified-2/exp-2 \ --max-duration 100 \ --context-size 2 \ --decoding-method modified_beam_search \ --beam-size 4 done done ``` You can find the decoding log for the above command in this repo (in the folder [log][log]). 
The WER for the test dataset is | | test |comment | |------------------------|------|----------------------------------------------------------------| | greedy search | 4.94 |--epoch 89, --avg 38, --max-duration 100, --max-sym-per-frame 1 | | modified beam search | 4.68 |--epoch 89, --avg 38, --max-duration 100 --beam-size 4 | # File description - [log][log], this directory contains the decoding log and decoding results - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] - [exp][exp], this directory contains only one file: `preprained.pt` `exp/pretrained.pt` is generated by the following command: ```bash epoch=89 avg=38 ./transducer_stateless_modified-2/export.py \ --exp-dir ./transducer_stateless_modified-2/exp-2 \ --lang-dir ./data/lang_char \ --epoch $epoch \ --avg $avg ``` **HINT**: To use `pretrained.pt` to compute the WER for the `test` dataset, just do the following: ```bash cp icefall-aishell-transducer-stateless-modified-2-2022-03-01/exp/pretrained.pt \ /path/to/icefall/egs/aishell/ASR/transducer_stateless_modified-2/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `transducer_stateless_modified-2/decode.py`. [icefall]: https://github.com/k2-fsa/icefall [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/aishell/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01/tree/main/log [icefall]: https://github.com/k2-fsa/icefall
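The decoder described above (an embedding layer followed by a Conv1d with kernel size 2, no RNN) can be sketched schematically as below. This is a simplified illustration under the dimensions stated in the card, not the icefall implementation; blank handling and the exact convolution configuration are omitted, and the vocabulary size is a placeholder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StatelessDecoder(nn.Module):
    """Schematic stateless transducer decoder: embedding + Conv1d over 2 previous labels."""

    def __init__(self, vocab_size: int, embed_dim: int = 512, context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.context_size = context_size
        # 1-D convolution over the previous `context_size` label embeddings.
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=context_size)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, U) label indices
        emb = self.embedding(y).permute(0, 2, 1)      # (batch, embed_dim, U)
        emb = F.pad(emb, (self.context_size - 1, 0))  # causal left padding
        out = self.conv(emb).permute(0, 2, 1)         # (batch, U, embed_dim)
        return torch.relu(out)

decoder = StatelessDecoder(vocab_size=500)  # placeholder vocabulary size
print(decoder(torch.randint(0, 500, (2, 10))).shape)  # torch.Size([2, 10, 512])
```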
{"language": "en", "license": "apache-2.0", "tags": ["icefall", "k2", "transducer", "aishell", "ASR", "stateless transducer", "PyTorch"], "datasets": ["aishell", "aidatatang_200zh"], "metrics": ["WER"]}
csukuangfj/icefall-aishell-transducer-stateless-modified-2-2022-03-01
null
[ "k2", "icefall", "transducer", "aishell", "ASR", "stateless transducer", "PyTorch", "en", "dataset:aishell", "dataset:aidatatang_200zh", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #k2 #icefall #transducer #aishell #ASR #stateless transducer #PyTorch #en #dataset-aishell #dataset-aidatatang_200zh #license-apache-2.0 #region-us
Introduction ============ This repo contains pre-trained model using <URL It is trained on AIShell dataset using modified transducer from optimized\_transducer. Also, it uses aidatatang\_200zh as extra training data. How to clone this repo ---------------------- Catuion: You have to run 'git lfs pull'. Otherwise, you will be SAD later. The model in this repo is trained using the commit 'TODO'. You can use to download 'icefall'. You can find the model information by visiting <URL In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 512-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from Rnn-Transducer with Stateless Prediction Network. A Conv1d layer is placed right after the input embedding layer. --- Description ----------- This repo provides pre-trained transducer Conformer model for the AIShell dataset using [icefall](URL). There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: The tensorboard training log can be found at <URL The commands for decoding are You can find the decoding log for the above command in this repo (in the folder [log](URL)). The WER for the test dataset is test: greedy search, comment: 4.94 test: modified beam search, comment: 4.68 File description ================ * [log](URL), this directory contains the decoding log and decoding results * [test\_wavs](URL), this directory contains wave files for testing the pre-trained model * [data](URL), this directory contains files generated by <URL> * [exp](URL), this directory contains only one file: 'URL' 'exp/URL' is generated by the following command: HINT: To use 'URL' to compute the WER for the 'test' dataset, just do the following: and pass '--epoch 999 --avg 1' to 'transducer\_stateless\_modified-2/URL'.
[]
[ "TAGS\n#k2 #icefall #transducer #aishell #ASR #stateless transducer #PyTorch #en #dataset-aishell #dataset-aidatatang_200zh #license-apache-2.0 #region-us \n" ]
null
k2
# Introduction This repo contains pre-trained model using <https://github.com/k2-fsa/icefall/pull/219>. It is trained on [AIShell](https://www.openslr.org/33/) dataset using modified transducer from [optimized_transducer](https://github.com/csukuangfj/optimized_transducer). ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01 cd icefall-aishell-transducer-stateless-modified-2022-03-01 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `TODO`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout TODO ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/TODO/egs/aishell/ASR/transducer_stateless_modified/train.py#L232>. In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 512-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from [Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419). A Conv1d layer is placed right after the input embedding layer. ----- ## Description This repo provides pre-trained transducer Conformer model for the AIShell dataset using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: ```bash cd egs/aishell/ASR ./prepare.sh --stop-stage 6 export CUDA_VISIBLE_DEVICES="0,1,2" ./transducer_stateless_modified/train.py \ --world-size 3 \ --num-epochs 90 \ --start-epoch 0 \ --exp-dir transducer_stateless_modified/exp-4 \ --max-duration 250 \ --lr-factor 2.0 \ --context-size 2 \ --modified-transducer-prob 0.25 ``` The tensorboard training log can be found at <https://tensorboard.dev/experiment/C27M8YxRQCa1t2XglTqlWg> The commands for decoding are ```bash # greedy search for epoch in 64; do for avg in 33; do ./transducer_stateless_modified-2/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless_modified/exp-4 \ --max-duration 100 \ --context-size 2 \ --decoding-method greedy_search \ --max-sym-per-frame 1 done done # modified beam search for epoch in 64; do for avg in 33; do ./transducer_stateless_modified/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless_modified/exp-4 \ --max-duration 100 \ --context-size 2 \ --decoding-method modified_beam_search \ --beam-size 4 done done ``` You can find the decoding log for the above command in this repo (in the folder [log][log]). 
The WER for the test dataset is | | test |comment | |------------------------|------|----------------------------------------------------------------| | greedy search | 5.22 |--epoch 64, --avg 33, --max-duration 100, --max-sym-per-frame 1 | | modified beam search | 5.02 |--epoch 64, --avg 33, --max-duration 100 --beam-size 4 | # File description - [log][log], this directory contains the decoding log and decoding results - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] - [exp][exp], this directory contains only one file: `preprained.pt` `exp/pretrained.pt` is generated by the following command: ```bash epoch=64 avg=33 ./transducer_stateless_modified/export.py \ --exp-dir ./transducer_stateless_modified/exp-4 \ --lang-dir ./data/lang_char \ --epoch $epoch \ --avg $avg ``` **HINT**: To use `pretrained.pt` to compute the WER for the `test` dataset, just do the following: ```bash cp icefall-aishell-transducer-stateless-modified-2022-03-01/exp/pretrained.pt \ /path/to/icefall/egs/aishell/ASR/transducer_stateless_modified/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `transducer_stateless_modified/decode.py`. [icefall]: https://github.com/k2-fsa/icefall [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/aishell/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/log [icefall]: https://github.com/k2-fsa/icefall
{"language": "en", "license": "apache-2.0", "tags": ["icefall", "k2", "transducer", "aishell", "ASR", "stateless transducer", "PyTorch"], "datasets": ["aishell"], "metrics": ["WER"]}
csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01
null
[ "k2", "icefall", "transducer", "aishell", "ASR", "stateless transducer", "PyTorch", "en", "dataset:aishell", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #k2 #icefall #transducer #aishell #ASR #stateless transducer #PyTorch #en #dataset-aishell #license-apache-2.0 #region-us
Introduction
============

This repo contains pre-trained model using <URL

It is trained on AIShell dataset using modified transducer from optimized\_transducer.

How to clone this repo
----------------------

Caution: You have to run 'git lfs pull'. Otherwise, you will be SAD later.

The model in this repo is trained using the commit 'TODO'.

You can use

to download 'icefall'.

You can find the model information by visiting <URL

In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 512-dim embedding layer and a Conv1d with kernel size 2.

The decoder architecture is modified from Rnn-Transducer with Stateless Prediction Network. A Conv1d layer is placed right after the input embedding layer.

---

Description
-----------

This repo provides pre-trained transducer Conformer model for the AIShell dataset using [icefall](URL).

There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d.

The commands for training are:

The tensorboard training log can be found at <URL

The commands for decoding are

You can find the decoding log for the above command in this repo (in the folder [log](URL)).

The WER for the test dataset is

* greedy search: 5.22
* modified beam search: 5.02

File description
================

* [log](URL), this directory contains the decoding log and decoding results
* [test\_wavs](URL), this directory contains wave files for testing the pre-trained model
* [data](URL), this directory contains files generated by <URL>
* [exp](URL), this directory contains only one file: 'URL'

'exp/URL' is generated by the following command:

HINT: To use 'URL' to compute the WER for the 'test' dataset, just do the following:

and pass '--epoch 999 --avg 1' to 'transducer\_stateless\_modified/URL'.
[]
[ "TAGS\n#k2 #icefall #transducer #aishell #ASR #stateless transducer #PyTorch #en #dataset-aishell #license-apache-2.0 #region-us \n" ]
null
null
# Introduction This repo contains pre-trained model using <https://github.com/k2-fsa/icefall/pull/213>. It is trained on train-clean-100 subset of the LibriSpeech dataset. Also, it uses the `S` subset from GigaSpeech as extra training data. ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21 cd icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `2332ba312d7ce72f08c7bac1e3312f7e3dd722dc`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout 2332ba312d7ce72f08c7bac1e3312f7e3dd722dc ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/2332ba312d7ce72f08c7bac1e3312f7e3dd722dc/egs/librispeech/ASR/transducer_stateless_multi_datasets/train.py#L198>. In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from [Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419). A Conv1d layer is placed right after the input embedding layer. ----- ## Description This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: ``` cd egs/librispeech/ASR/ ./prepare.sh ./prepare_giga_speech.sh export CUDA_VISIBLE_DEVICES="0,1" ./transducer_stateless_multi_datasets/train.py \ --world-size 2 \ --num-epochs 60 \ --start-epoch 0 \ --exp-dir transducer_stateless_multi_datasets/exp-100-2 \ --full-libri 0 \ --max-duration 300 \ --lr-factor 1 \ --bpe-model data/lang_bpe_500/bpe.model \ --modified-transducer-prob 0.25 --giga-prob 0.2 ``` The tensorboard training log can be found at <https://tensorboard.dev/experiment/qUEKzMnrTZmOz1EXPda9RA/> The command for decoding is: ``` epoch=57 avg=17 ## greedy search for epoch in 57; do for avg in 17; do for sym in 1 2 3; do ./transducer_stateless_multi_datasets/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless_multi_datasets/exp-100-2 \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 \ --context-size 2 \ --max-sym-per-frame $sym done done done ## modified beam search epoch=57 avg=17 ./transducer_stateless_multi_datasets/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless_multi_datasets/exp-100-2 \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 \ --context-size 2 \ --decoding-method modified_beam_search \ --beam-size 4 ``` You can find the decoding log for the above command in this repo (in the folder `log`). 
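A note on the `--epoch 57 --avg 17` options used above: before decoding, the parameters of the last 17 epoch checkpoints (up to `epoch-57.pt`) are averaged into a single model. The sketch below shows the idea in plain PyTorch; it is an illustration of the arithmetic, not the actual icefall helper, and it assumes each checkpoint stores its weights under a `"model"` key.

```python
import torch


def average_checkpoints(filenames):
    """Average model parameters across several checkpoint files.

    Illustrative re-implementation of what `--epoch E --avg N` does
    conceptually; the real icefall helper may differ in details
    (e.g. how integer buffers are handled).
    """
    avg = None
    for f in filenames:
        state = torch.load(f, map_location="cpu")["model"]  # assumption: {"model": state_dict}
        if avg is None:
            avg = {k: v.clone().to(torch.float64) for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].to(torch.float64)
    n = len(filenames)
    return {k: (v / n).to(torch.float32) for k, v in avg.items()}


# e.g. --epoch 57 --avg 17 corresponds (roughly) to averaging epoch-41.pt .. epoch-57.pt
ckpts = [
    f"transducer_stateless_multi_datasets/exp-100-2/epoch-{i}.pt" for i in range(41, 58)
]
# averaged_state_dict = average_checkpoints(ckpts)
```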
The WERs for the test datasets are

|                                     | test-clean | test-other | comment                                   |
|-------------------------------------|------------|------------|-------------------------------------------|
| greedy search (max sym per frame 1) | 6.34       | 16.7       | --epoch 57, --avg 17, --max-duration 100  |
| greedy search (max sym per frame 2) | 6.34       | 16.7       | --epoch 57, --avg 17, --max-duration 100  |
| greedy search (max sym per frame 3) | 6.34       | 16.7       | --epoch 57, --avg 17, --max-duration 100  |
| modified beam search (beam size 4)  | 6.31       | 16.3       | --epoch 57, --avg 17, --max-duration 100  |

# File description

- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`

`exp/pretrained.pt` is generated by the following command:

```bash
./transducer_stateless_multi_datasets/export.py \
  --epoch 57 \
  --avg 17 \
  --bpe-model data/lang_bpe_500/bpe.model \
  --exp-dir transducer_stateless_multi_datasets/exp-full
```

**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other, just do the following:

```
cp icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21/exp/pretrained.pt \
  /path/to/icefall/egs/librispeech/ASR/transducer_stateless_multi_datasets/exp/epoch-999.pt
```

and pass `--epoch 999 --avg 1` to `transducer_stateless_multi_datasets/decode.py`.

[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21/tree/main/log
{}
csukuangfj/icefall-asr-librispeech-100h-transducer-stateless-multi-datasets-bpe-500-2022-02-21
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
Introduction ============ This repo contains pre-trained model using <URL It is trained on train-clean-100 subset of the LibriSpeech dataset. Also, it uses the 'S' subset from GigaSpeech as extra training data. How to clone this repo ---------------------- Catuion: You have to run 'git lfs pull'. Otherwise, you will be SAD later. The model in this repo is trained using the commit '2332ba312d7ce72f08c7bac1e3312f7e3dd722dc'. You can use to download 'icefall'. You can find the model information by visiting <URL In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from Rnn-Transducer with Stateless Prediction Network. A Conv1d layer is placed right after the input embedding layer. --- Description ----------- This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall](URL). There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: The tensorboard training log can be found at <URL The command for decoding is: You can find the decoding log for the above command in this repo (in the folder 'log'). The WERs for the test datasets are File description ================ * [log](URL), this directory contains the decoding log and decoding results * [test\_wavs](URL), this directory contains wave files for testing the pre-trained model * [data](URL), this directory contains files generated by <URL> * [exp](URL), this directory contains only one file: 'URL' 'exp/URL' is generated by the following command: HINT: To use 'URL' to compute the WER for test-clean and test-other, just do the following: and pass '--epoch 999 --avg 1' to 'transducer\_stateless\_multi\_datasets/URL'.
[]
[ "TAGS\n#region-us \n" ]
null
null
# Introduction ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09 cd icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. ----- ## Description This repo provides pre-trained conformer CTC model for the librispeech dataset using [icefall][icefall]. The commands for training are: ``` cd egs/librispeech/ASR/conformer_ctc ./prepare.sh export CUDA_VISIBLE_DEVICES="0,1,2,3" ./conformer_ctc/train.py \ --exp-dir conformer_ctc/exp_500_att0.8 \ --lang-dir data/lang_bpe_500 \ --att-rate 0.8 \ --full-libri 1 \ --max-duration 200 \ --concatenate-cuts 0 \ --world-size 4 \ --bucketing-sampler 1 \ --start-epoch 0 \ --num-epochs 90 ``` The command for decoding is: ``` ./conformer_ctc/decode.py \ --exp-dir conformer_ctc/exp_500_att0.8 \ --lang-dir data/lang_bpe_500 \ --max-duration 30 \ --concatenate-cuts 0 \ --bucketing-sampler 1 \ --num-paths 1000 \ --epoch 77 \ --avg 55 \ --method attention-decoder \ --nbest-scale 0.5 ``` You can find the decoding log for the above command in this repo: [log/log-decode-2021-11-09-17-38-28](log/log-decode-2021-11-09-17-38-28). The best WER for the librispeech test dataset is: | | test-clean | test-other | |-----|------------|------------| | WER | 2.42 | 5.73 | Scale values used in n-gram LM rescoring and attention rescoring for the best WERs are: | ngram_lm_scale | attention_scale | |----------------|-----------------| | 2.0 | 2.0 | # File description - [log][log], this directory contains the decoding log - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] Note: For the `data/lm` directory, we provide only `G_4_gram.pt`. If you need other files in this directory, please run [prepare.sh][prepare]. - [exp][exp], this directory contains two files: `preprained.pt` and `cpu_jit.pt`. `exp/pretrained.pt` is generated by the following command: ``` ./conformer_ctc/export.py \ --epoch 77 \ --avg 55 \ --jit 0 \ --lang-dir data/lang_bpe_500 \ --exp-dir conformer_ctc/exp_500_att0.8 ``` **HINT**: To use `pre-trained.pt` to compute the WER for test-clean and test-other, just do the following: ``` cp icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/pretrained.pt \ /path/to/icefall/egs/librispeech/ASR/conformer_ctc/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `conformer_ctc/decode.py`. `exp/cpu_jit.pt` is generated by the following command: ``` ./conformer_ctc/export.py \ --epoch 77 \ --avg 55 \ --jit 1 \ --lang-dir data/lang_bpe_500 \ --exp-dir conformer_ctc/exp_500_att0.8 ``` # Deploy your model in C++ using k2 To deploy your model in C++ using k2 without depending on Python, do the following: ``` # Note: It requires torch >= 1.8.0 git clone https://github.com/k2-fsa/k2 cd k2 git checkout v2.0-pre mkdir build_release cd build_release cmake -DCMAKE_BUILD_TYPE=Release .. 
make -j ctc_decode hlg_decode ngram_lm_rescore attention_rescore ``` ## CTC decoding ``` cd k2/build_release ./bin/ctc_decode \ --use_gpu true \ --nn_model ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/cpu_jit.pt \ --bpe_model ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/bpe.model \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1089-134686-0001.wav \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0001.wav \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0002.wav ``` ## HLG decoding ``` ./bin/hlg_decode \ --use_gpu true \ --nn_model ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/cpu_jit.pt \ --hlg ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/HLG.pt \ --word_table ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/words.txt \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1089-134686-0001.wav \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0001.wav \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0002.wav ``` ## HLG decoding + n-gram LM rescoring **NOTE**: V100 GPU with 16 GB RAM is known NOT to work because of OOM. V100 GPU with 32 GB RAM is known to work. ``` ./bin/ngram_lm_rescore \ --use_gpu true \ --nn_model ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/cpu_jit.pt \ --hlg ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/HLG.pt \ --g ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lm/G_4_gram.pt \ --ngram_lm_scale 1.0 \ --word_table ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/words.txt \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1089-134686-0001.wav \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0001.wav \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0002.wav ``` ## HLG decoding + n-gram LM rescoring + attention decoder rescoring **NOTE**: V100 GPU with 16 GB RAM is known NOT to work because of OOM. V100 GPU with 32 GB RAM is known to work. 
``` ./bin/attention_rescore \ --use_gpu true \ --nn_model ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/cpu_jit.pt \ --hlg ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/HLG.pt \ --g ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lm/G_4_gram.pt \ --ngram_lm_scale 2.0 \ --attention_scale 2.0 \ --num_paths 100 \ --nbest_scale 0.5 \ --word_table ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/data/lang_bpe_500/words.txt \ --sos_id 1 \ --eos_id 1 \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1089-134686-0001.wav \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0001.wav \ ./icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1221-135766-0002.wav ``` [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/tree/main/log [icefall]: https://github.com/k2-fsa/icefall
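Besides the C++ binaries above, `exp/cpu_jit.pt` is a TorchScript file, so it can also be loaded from Python with `torch.jit.load` and fed Kaldi-style fbank features such as the ones computed below. The 80-dim feature setting is an assumption based on the usual LibriSpeech recipe, and the exact inputs expected by the scripted model depend on the icefall version, so check `conformer_ctc/pretrained.py` before wiring the two together.

```python
import torch
import torchaudio

# Load the TorchScript model exported with --jit 1 (runs on CPU).
model = torch.jit.load(
    "icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/exp/cpu_jit.pt"
)
model.eval()

# Compute Kaldi-style fbank features for one of the test waves.
wave, sample_rate = torchaudio.load(
    "icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09/test_wavs/1089-134686-0001.wav"
)
features = torchaudio.compliance.kaldi.fbank(
    wave,
    num_mel_bins=80,               # assumption: 80-dim fbank, as in the LibriSpeech recipe
    sample_frequency=sample_rate,
    frame_length=25.0,
    frame_shift=10.0,
)
print(features.shape)  # (num_frames, 80)
# How `features` is passed to `model` is version dependent; consult
# conformer_ctc/pretrained.py in icefall for the expected call signature.
```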
{}
csukuangfj/icefall-asr-librispeech-conformer-ctc-jit-bpe-500-2021-11-09
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
Introduction
============

How to clone this repo
----------------------

Caution: You have to run 'git lfs pull'. Otherwise, you will be SAD later.

---

Description
-----------

This repo provides pre-trained conformer CTC model for the librispeech dataset using [icefall](URL).

The commands for training are:

The command for decoding is:

You can find the decoding log for the above command in this repo: log/log-decode-2021-11-09-17-38-28.

The best WER for the librispeech test dataset is:

test-clean: 2.42, test-other: 5.73

Scale values used in n-gram LM rescoring and attention rescoring for the best WERs are:

ngram\_lm\_scale: 2.0, attention\_scale: 2.0

File description
================

* [log](URL), this directory contains the decoding log
* [test\_wavs](URL), this directory contains wave files for testing the pre-trained model
* [data](URL), this directory contains files generated by <URL>

Note: For the 'data/lm' directory, we provide only 'G\_4\_gram.pt'. If you need other files in this directory, please run <URL>.

* [exp](URL), this directory contains two files: 'URL' and 'cpu\_jit.pt'.

'exp/URL' is generated by the following command:

HINT: To use 'URL' to compute the WER for test-clean and test-other, just do the following:

and pass '--epoch 999 --avg 1' to 'conformer\_ctc/URL'.

'exp/cpu\_jit.pt' is generated by the following command:

Deploy your model in C++ using k2
=================================

To deploy your model in C++ using k2 without depending on Python, do the following:

CTC decoding
------------

HLG decoding
------------

HLG decoding + n-gram LM rescoring
----------------------------------

NOTE: V100 GPU with 16 GB RAM is known NOT to work because of OOM. V100 GPU with 32 GB RAM is known to work.

HLG decoding + n-gram LM rescoring + attention decoder rescoring
----------------------------------------------------------------

NOTE: V100 GPU with 16 GB RAM is known NOT to work because of OOM. V100 GPU with 32 GB RAM is known to work.
[]
[ "TAGS\n#region-us \n" ]
null
null
# Introduction ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17 cd icefall-asr-librispeech-transducer-bpe-500-2021-12-17 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `cb04c8a7509425ab45fae888b0ca71bbbd23f0de`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout cb04c8a7509425ab45fae888b0ca71bbbd23f0de ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/cb04c8a7509425ab45fae888b0ca71bbbd23f0de/egs/librispeech/ASR/transducer/train.py#L196> In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer, plus a 4-layer LSTM with hidden size 512. ----- ## Description This repo provides pre-trained RNN-T Conformer model for the librispeech dataset using [icefall][icefall]. The commands for training are: ``` cd egs/librispeech/ASR/ ./prepare.sh export CUDA_VISIBLE_DEVICES="0,1,2,3" ./transducer/train.py \ --world-size 4 \ --num-epochs 30 \ --start-epoch 0 \ --exp-dir transducer/exp-lr-2.5-full \ --full-libri 1 \ --max-duration 250 \ --lr-factor 2.5 ``` The command for decoding is: ``` epoch=26 avg=12 ./transducer/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer/exp-lr-2.5-full \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 ``` You can find the decoding log for the above command in this repo: [log/log-decode-epoch-26-avg-12-2021-12-17-09-33-04](log/log-decode-epoch-26-avg-12-2021-12-17-09-33-04). The best WER using greedy search is: | | test-clean | test-other | |-----|------------|------------| | WER | 3.16 | 7.71 | # File description - [log][log], this directory contains the decoding log and decoding results - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] - [exp][exp], this directory contains only one file: `preprained.pt` `exp/pretrained.pt` is generated by the following command: ``` ./transducer/export.py \ --epoch 26 \ --avg 12 \ --bpe-model data/lang_bpe_500/bpe.model \ --exp-dir transducer/exp-lr-2.5-full ``` **HINT**: To use `pre-trained.pt` to compute the WER for test-clean and test-other, just do the following: ``` cp icefall-asr-librispeech-transducer-bpe-500-2021-12-17/exp/pretrained.pt \ /path/to/icefall/egs/librispeech/ASR/transducer/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `transducer/decode.py`. [icefall]: https://github.com/k2-fsa/icefall [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17/tree/main/log [icefall]: https://github.com/k2-fsa/icefall
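For readers comparing this model with the stateless-decoder repos, here is a rough sketch of the prediction network described above (a 1024-dim embedding feeding a 4-layer LSTM with hidden size 512). The class name and the absence of any output projection are simplifications for illustration; this is not the icefall code itself.

```python
import torch
import torch.nn as nn


class RnntPredictionNetwork(nn.Module):
    """Sketch of the decoder described above: a 1024-dim embedding
    followed by a 4-layer LSTM with hidden size 512."""

    def __init__(self, vocab_size: int = 500, embedding_dim: int = 1024,
                 hidden_size: int = 512, num_layers: int = 4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.rnn = nn.LSTM(embedding_dim, hidden_size,
                           num_layers=num_layers, batch_first=True)

    def forward(self, y, state=None):
        # y: (N, U) previously emitted token IDs; `state` carries the
        # LSTM hidden/cell states between decoding steps.
        out, state = self.rnn(self.embedding(y), state)
        return out, state


net = RnntPredictionNetwork()
out, state = net(torch.randint(0, 500, (2, 7)))
print(out.shape)  # torch.Size([2, 7, 512])
```

Unlike the Conv1d-based stateless decoders used in the later repos, this network carries LSTM hidden and cell states across decoding steps.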
{}
csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-17
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
Introduction
============

How to clone this repo
----------------------

Caution: You have to run 'git lfs pull'. Otherwise, you will be SAD later.

The model in this repo is trained using the commit 'cb04c8a7509425ab45fae888b0ca71bbbd23f0de'.

You can use

to download 'icefall'.

You can find the model information by visiting <URL

In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer, plus a 4-layer LSTM with hidden size 512.

---

Description
-----------

This repo provides pre-trained RNN-T Conformer model for the librispeech dataset using [icefall](URL).

The commands for training are:

The command for decoding is:

You can find the decoding log for the above command in this repo: log/log-decode-epoch-26-avg-12-2021-12-17-09-33-04.

The best WER using greedy search is:

test-clean: 3.16, test-other: 7.71

File description
================

* [log](URL), this directory contains the decoding log and decoding results
* [test\_wavs](URL), this directory contains wave files for testing the pre-trained model
* [data](URL), this directory contains files generated by <URL>
* [exp](URL), this directory contains only one file: 'URL'

'exp/URL' is generated by the following command:

HINT: To use 'URL' to compute the WER for test-clean and test-other, just do the following:

and pass '--epoch 999 --avg 1' to 'transducer/URL'.
[]
[ "TAGS\n#region-us \n" ]
null
null
# Introduction ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23 cd icefall-asr-librispeech-transducer-bpe-500-2021-12-23 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `5b6699a8354b70b23b252b371c612a35ed186ec2`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout 5b6699a8354b70b23b252b371c612a35ed186ec2 ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/5b6699a8354b70b23b252b371c612a35ed186ec2/egs/librispeech/ASR/transducer/train.py#L191> In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer, plus a 2-layer LSTM with hidden size 512. ----- ## Description This repo provides pre-trained RNN-T Conformer model for the librispeech dataset using [icefall][icefall]. The commands for training are: ``` cd egs/librispeech/ASR/ ./prepare.sh export CUDA_VISIBLE_DEVICES="0,1,2,3" ./transducer/train.py \ --world-size 4 \ --num-epochs 35 \ --start-epoch 0 \ --exp-dir transducer/exp-lr-2.5-full \ --full-libri 1 \ --max-duration 180 \ --lr-factor 2.5 ``` The command for decoding is: ``` epoch=34 avg=11 ./transducer/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer/exp-lr-2.5-full \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 ``` You can find the decoding log for the above command in the `log` folder of this repo. The best WER using greedy search is: | | test-clean | test-other | |-----|------------|------------| | WER | 3.07 | 7.51 | # File description - [log][log], this directory contains the decoding log and decoding results - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] - [exp][exp], this directory contains only one file: `preprained.pt` `exp/pretrained.pt` is generated by the following command: ``` ./transducer/export.py \ --epoch 34 \ --avg 11 \ --bpe-model data/lang_bpe_500/bpe.model \ --exp-dir transducer/exp-lr-2.5-full ``` **HINT**: To use `pre-trained.pt` to compute the WER for test-clean and test-other, just do the following: ``` cp icefall-asr-librispeech-transducer-bpe-500-2021-12-23/exp/pretrained.pt \ /path/to/icefall/egs/librispeech/ASR/transducer/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `transducer/decode.py`. [icefall]: https://github.com/k2-fsa/icefall [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23/tree/main/log [icefall]: https://github.com/k2-fsa/icefall
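The WER figures reported for these models are plain word-level edit distances. If you want to re-score the recognition results stored in the `log` folder yourself, the underlying computation is just the following (icefall ships its own scoring utilities; this snippet only shows the arithmetic):

```python
def word_error_rate(ref: str, hyp: str) -> float:
    """Levenshtein distance between word sequences divided by the
    number of reference words (substitutions + insertions + deletions)."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)


print(word_error_rate("the cat sat", "the cat sat down"))  # 0.333... (one insertion)
```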
{}
csukuangfj/icefall-asr-librispeech-transducer-bpe-500-2021-12-23
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
Introduction
============

How to clone this repo
----------------------

Caution: You have to run 'git lfs pull'. Otherwise, you will be SAD later.

The model in this repo is trained using the commit '5b6699a8354b70b23b252b371c612a35ed186ec2'.

You can use

to download 'icefall'.

You can find the model information by visiting <URL

In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer, plus a 2-layer LSTM with hidden size 512.

---

Description
-----------

This repo provides pre-trained RNN-T Conformer model for the librispeech dataset using [icefall](URL).

The commands for training are:

The command for decoding is:

You can find the decoding log for the above command in the 'log' folder of this repo.

The best WER using greedy search is:

test-clean: 3.07, test-other: 7.51

File description
================

* [log](URL), this directory contains the decoding log and decoding results
* [test\_wavs](URL), this directory contains wave files for testing the pre-trained model
* [data](URL), this directory contains files generated by <URL>
* [exp](URL), this directory contains only one file: 'URL'

'exp/URL' is generated by the following command:

HINT: To use 'URL' to compute the WER for test-clean and test-other, just do the following:

and pass '--epoch 999 --avg 1' to 'transducer/URL'.
[]
[ "TAGS\n#region-us \n" ]
null
null
# Introduction ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22 cd icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `fb6a57e9e01dd8aae2af2a6b4568daad8bc8ab32`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout fb6a57e9e01dd8aae2af2a6b4568daad8bc8ab32 ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/fb6a57e9e01dd8aae2af2a6b4568daad8bc8ab32/egs/librispeech/ASR/transducer_stateless/train.py#L195>. In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from [Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419). A Conv1d layer is placed right after the input embedding layer. ----- ## Description This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: ``` cd egs/librispeech/ASR/ ./prepare.sh export CUDA_VISIBLE_DEVICES="0,1,2,3" ./transducer_stateless/train.py \ --world-size 4 \ --num-epochs 30 \ --start-epoch 0 \ --exp-dir transducer_stateless/exp-full \ --full-libri 1 \ --max-duration 250 \ --lr-factor 3 ``` The tensorboard training log can be found at <https://tensorboard.dev/experiment/PsJ3LgkEQfOmzedAlYfVeg/#scalars&_smoothingWeight=0> The command for decoding is: ``` epoch=20 avg=10 ## greedy search ./transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless/exp-full \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 ## beam search ./transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless/exp-full \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 \ --decoding-method beam_search \ --beam-size 4 ``` You can find the decoding log for the above command in this repo (in the folder `log`). 
The WERs for the test datasets are | | test-clean | test-other | comment | |---------------------------|------------|------------|------------------------------------------| | greedy search | 2.99 | 7.52 | --epoch 20, --avg 10, --max-duration 100 | | beam search (beam size 2) | 2.95 | 7.43 | | | beam search (beam size 3) | 2.94 | 7.37 | | | beam search (beam size 4) | 2.92 | 7.37 | | | beam search (beam size 5) | 2.93 | 7.38 | | | beam search (beam size 8) | 2.92 | 7.38 | | # File description - [log][log], this directory contains the decoding log and decoding results - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] - [exp][exp], this directory contains only one file: `preprained.pt` `exp/pretrained.pt` is generated by the following command: ``` ./transducer_stateless/export.py \ --epoch 20 \ --avg 10 \ --bpe-model data/lang_bpe_500/bpe.model \ --exp-dir transducer_stateless/exp-full ``` **HINT**: To use `pre-trained.pt` to compute the WER for test-clean and test-other, just do the following: ``` cp icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22/exp/pretrained.pt \ /path/to/icefall/egs/librispeech/ASR/transducer_stateless/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `transducer_stateless/decode.py`. [icefall]: https://github.com/k2-fsa/icefall [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22/tree/main/log [icefall]: https://github.com/k2-fsa/icefall
{}
csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-22
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
Introduction ============ How to clone this repo ---------------------- Catuion: You have to run 'git lfs pull'. Otherwise, you will be SAD later. The model in this repo is trained using the commit 'fb6a57e9e01dd8aae2af2a6b4568daad8bc8ab32'. You can use to download 'icefall'. You can find the model information by visiting <URL In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from Rnn-Transducer with Stateless Prediction Network. A Conv1d layer is placed right after the input embedding layer. --- Description ----------- This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall](URL). There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: The tensorboard training log can be found at <URL The command for decoding is: You can find the decoding log for the above command in this repo (in the folder 'log'). The WERs for the test datasets are File description ================ * [log](URL), this directory contains the decoding log and decoding results * [test\_wavs](URL), this directory contains wave files for testing the pre-trained model * [data](URL), this directory contains files generated by <URL> * [exp](URL), this directory contains only one file: 'URL' 'exp/URL' is generated by the following command: HINT: To use 'URL' to compute the WER for test-clean and test-other, just do the following: and pass '--epoch 999 --avg 1' to 'transducer\_stateless/URL'.
[]
[ "TAGS\n#region-us \n" ]
null
null
# Introduction ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27 cd icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `14c93add507982306f5a478cd144e0e32e0f970d`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout 14c93add507982306f5a478cd144e0e32e0f970d ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/14c93add507982306f5a478cd144e0e32e0f970d/egs/librispeech/ASR/transducer_stateless/train.py#L198>. In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from [Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419). A Conv1d layer is placed right after the input embedding layer. ----- ## Description This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: ``` cd egs/librispeech/ASR/ ./prepare.sh export CUDA_VISIBLE_DEVICES="0,1,2,3" ./transducer_stateless/train.py \ --world-size 4 \ --num-epochs 30 \ --start-epoch 0 \ --exp-dir transducer_stateless/exp-full \ --full-libri 1 \ --max-duration 250 \ --lr-factor 3 ``` The tensorboard training log can be found at <https://tensorboard.dev/experiment/Mjx7MeTgR3Oyr1yBCwjozw/> The command for decoding is: ``` epoch=29 avg=13 ## greedy search ./transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless/exp-full \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 ## beam search ./transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless/exp-full \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 \ --decoding-method beam_search \ --beam-size 4 ``` You can find the decoding log for the above command in this repo (in the folder `log`). 
The WERs for the test datasets are | | test-clean | test-other | comment | |---------------------------|------------|------------|------------------------------------------| | greedy search | 2.85 | 7.30 | --epoch 29, --avg 13, --max-duration 100 | | beam search (beam size 4) | 2.83 | 7.19 | | # File description - [log][log], this directory contains the decoding log and decoding results - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] - [exp][exp], this directory contains only one file: `preprained.pt` `exp/pretrained.pt` is generated by the following command: ``` ./transducer_stateless/export.py \ --epoch 29 \ --avg 13 \ --bpe-model data/lang_bpe_500/bpe.model \ --exp-dir transducer_stateless/exp-full ``` **HINT**: To use `pre-trained.pt` to compute the WER for test-clean and test-other, just do the following: ``` cp icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/exp/pretrained.pt \ /path/to/icefall/egs/librispeech/ASR/transducer_stateless/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `transducer_stateless/decode.py`. [icefall]: https://github.com/k2-fsa/icefall [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27/tree/main/log [icefall]: https://github.com/k2-fsa/icefall
{}
csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2021-12-27
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
Introduction ============ How to clone this repo ---------------------- Catuion: You have to run 'git lfs pull'. Otherwise, you will be SAD later. The model in this repo is trained using the commit '14c93add507982306f5a478cd144e0e32e0f970d'. You can use to download 'icefall'. You can find the model information by visiting <URL In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from Rnn-Transducer with Stateless Prediction Network. A Conv1d layer is placed right after the input embedding layer. --- Description ----------- This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall](URL). There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: The tensorboard training log can be found at <URL The command for decoding is: You can find the decoding log for the above command in this repo (in the folder 'log'). The WERs for the test datasets are File description ================ * [log](URL), this directory contains the decoding log and decoding results * [test\_wavs](URL), this directory contains wave files for testing the pre-trained model * [data](URL), this directory contains files generated by <URL> * [exp](URL), this directory contains only one file: 'URL' 'exp/URL' is generated by the following command: HINT: To use 'URL' to compute the WER for test-clean and test-other, just do the following: and pass '--epoch 999 --avg 1' to 'transducer\_stateless/URL'.
[]
[ "TAGS\n#region-us \n" ]
null
null
# Introduction ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10 cd icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `4c1b3665ee6efb935f4dd93a80ff0e154b13efb6`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout 4c1b3665ee6efb935f4dd93a80ff0e154b13efb6 ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/273e5fb2f3ac2620bafdffe2689b8b3ee10173d3/egs/librispeech/ASR/transducer_stateless/train.py#L198>. In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from [Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419). A Conv1d layer is placed right after the input embedding layer. ----- ## Description This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: ``` cd egs/librispeech/ASR/ ./prepare.sh export CUDA_VISIBLE_DEVICES="0,1,2,3" ./transducer_stateless/train.py \ --world-size 4 \ --num-epochs 76 \ --start-epoch 0 \ --exp-dir transducer_stateless/exp-full \ --full-libri 1 \ --max-duration 250 \ --lr-factor 3 ``` The tensorboard training log can be found at <https://tensorboard.dev/experiment/qGdqzHnxS0WJ695OXfZDzA/> The command for decoding is: ``` epoch=71 avg=15 ## greedy search ./transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless/exp-full \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 ## beam search ./transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless/exp-full \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 \ --decoding-method beam_search \ --beam-size 4 ``` You can find the decoding log for the above command in this repo (in the folder `log`). 
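All of the BPE-based repos here use `data/lang_bpe_500/bpe.model`, a 500-token SentencePiece model, to map between text and the token IDs the transducer emits. A quick way to inspect it from Python is shown below (requires the `sentencepiece` package; the sample sentence is arbitrary, and note that the LibriSpeech transcripts are uppercase):

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.load("data/lang_bpe_500/bpe.model")

print(sp.get_piece_size())            # expected to be 500 for lang_bpe_500
pieces = sp.encode("HELLO WORLD", out_type=str)
ids = sp.encode("HELLO WORLD", out_type=int)
print(pieces, ids)
print(sp.decode(ids))                 # back to "HELLO WORLD"
```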
The WERs for the test datasets are | | test-clean | test-other | comment | |---------------------------|------------|------------|------------------------------------------| | greedy search | 2.69 | 6.81 | --epoch 71, --avg 15, --max-duration 100 | | beam search (beam size 4) | 2.68 | 6.72 | --epoch 71, --avg 15, --max-duration 100 | # File description - [log][log], this directory contains the decoding log and decoding results - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] - [exp][exp], this directory contains only one file: `preprained.pt` `exp/pretrained.pt` is generated by the following command: ``` ./transducer_stateless/export.py \ --epoch 71 \ --avg 15 \ --bpe-model data/lang_bpe_500/bpe.model \ --exp-dir transducer_stateless/exp-full ``` **HINT**: To use `pre-trained.pt` to compute the WER for test-clean and test-other, just do the following: ``` cp icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10/exp/pretrained.pt \ /path/to/icefall/egs/librispeech/ASR/transducer_stateless/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `transducer_stateless/decode.py`. [icefall]: https://github.com/k2-fsa/icefall [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10/tree/main/log [icefall]: https://github.com/k2-fsa/icefall
{}
csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-01-10
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
Introduction ============ How to clone this repo ---------------------- Catuion: You have to run 'git lfs pull'. Otherwise, you will be SAD later. The model in this repo is trained using the commit '4c1b3665ee6efb935f4dd93a80ff0e154b13efb6'. You can use to download 'icefall'. You can find the model information by visiting <URL In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from Rnn-Transducer with Stateless Prediction Network. A Conv1d layer is placed right after the input embedding layer. --- Description ----------- This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall](URL). There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: The tensorboard training log can be found at <URL The command for decoding is: You can find the decoding log for the above command in this repo (in the folder 'log'). The WERs for the test datasets are File description ================ * [log](URL), this directory contains the decoding log and decoding results * [test\_wavs](URL), this directory contains wave files for testing the pre-trained model * [data](URL), this directory contains files generated by <URL> * [exp](URL), this directory contains only one file: 'URL' 'exp/URL' is generated by the following command: HINT: To use 'URL' to compute the WER for test-clean and test-other, just do the following: and pass '--epoch 999 --avg 1' to 'transducer\_stateless/URL'.
[]
[ "TAGS\n#region-us \n" ]
null
null
# Introduction ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07 cd icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `a8150021e01d34ecbd6198fe03a57eacf47a16f2`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout a8150021e01d34ecbd6198fe03a57eacf47a16f2 ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/a8150021e01d34ecbd6198fe03a57eacf47a16f2/egs/librispeech/ASR/transducer_stateless/train.py#L198>. In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from [Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419). A Conv1d layer is placed right after the input embedding layer. ----- ## Description This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: ``` cd egs/librispeech/ASR/ ./prepare.sh export CUDA_VISIBLE_DEVICES="0,1,2,3" ./transducer_stateless/train.py \ --world-size 4 \ --num-epochs 76 \ --start-epoch 0 \ --exp-dir transducer_stateless/exp-full \ --full-libri 1 \ --max-duration 300 \ --lr-factor 5 \ --bpe-model data/lang_bpe_500/bpe.model \ --modified-transducer-prob 0.25 ``` The tensorboard training log can be found at <https://tensorboard.dev/experiment/qgvWkbF2R46FYA6ZMNmOjA/> The command for decoding is: ``` epoch=63 avg=19 ## greedy search for sym in 1 2 3; do ./transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless/exp-full \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 \ --max-sym-per-frame $sym done ## modified beam search ./transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless/exp-full \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 \ --context-size 2 \ --decoding-method modified_beam_search \ --beam-size 4 ``` You can find the decoding log for the above command in this repo (in the folder `log`). 
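The `--max-sym-per-frame` option used above caps how many non-blank symbols greedy search may emit for a single encoder frame; the results table below shows that on this model the cap makes no difference. A schematic version of that loop is given here. The `decoder` and `joiner` callables and the tensor shapes are stand-ins for the real icefall modules, so treat this as an illustration of the idea rather than the project's implementation.

```python
import torch


@torch.no_grad()
def greedy_search(encoder_out, decoder, joiner, blank_id=0,
                  context_size=2, max_sym_per_frame=3):
    """Schematic transducer greedy search for a single utterance.

    encoder_out: (T, C) encoder output frames.
    decoder/joiner: callables standing in for the real modules.
    """
    hyp = [blank_id] * context_size          # blanks as the initial context
    for t in range(encoder_out.size(0)):
        emitted = 0
        while emitted < max_sym_per_frame:
            context = torch.tensor(hyp[-context_size:]).unsqueeze(0)  # (1, context_size)
            dec_out = decoder(context)                                # (1, 1, C), assumed shape
            logits = joiner(encoder_out[t].view(1, 1, -1), dec_out)   # (1, 1, vocab)
            y = logits.argmax(dim=-1).item()
            if y == blank_id:
                break            # nothing more for this frame; move to the next one
            hyp.append(y)        # emit a symbol and try the same frame again
            emitted += 1
    return hyp[context_size:]    # strip the initial blank context
```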
The WERs for the test datasets are | | test-clean | test-other | comment | |-------------------------------------|------------|------------|------------------------------------------| | greedy search (max sym per frame 1) | 2.67 | 6.67 | --epoch 63, --avg 19, --max-duration 100 | | greedy search (max sym per frame 2) | 2.67 | 6.67 | --epoch 63, --avg 19, --max-duration 100 | | greedy search (max sym per frame 3) | 2.67 | 6.67 | --epoch 63, --avg 19, --max-duration 100 | | modified beam search (beam size 4) | 2.67 | 6.57 | --epoch 63, --avg 19, --max-duration 100 | # File description - [log][log], this directory contains the decoding log and decoding results - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] - [exp][exp], this directory contains only one file: `preprained.pt` `exp/pretrained.pt` is generated by the following command: ``` ./transducer_stateless/export.py \ --epoch 63 \ --avg 19 \ --bpe-model data/lang_bpe_500/bpe.model \ --exp-dir transducer_stateless/exp-full ``` **HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other, just do the following: ``` cp icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07/exp/pretrained.pt \ /path/to/icefall/egs/librispeech/ASR/transducer_stateless/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `transducer_stateless/decode.py`. [icefall]: https://github.com/k2-fsa/icefall [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07/tree/main/log [icefall]: https://github.com/k2-fsa/icefall
{}
csukuangfj/icefall-asr-librispeech-transducer-stateless-bpe-500-2022-02-07
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
Introduction ============ How to clone this repo ---------------------- Catuion: You have to run 'git lfs pull'. Otherwise, you will be SAD later. The model in this repo is trained using the commit 'a8150021e01d34ecbd6198fe03a57eacf47a16f2'. You can use to download 'icefall'. You can find the model information by visiting <URL In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from Rnn-Transducer with Stateless Prediction Network. A Conv1d layer is placed right after the input embedding layer. --- Description ----------- This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall](URL). There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: The tensorboard training log can be found at <URL The command for decoding is: You can find the decoding log for the above command in this repo (in the folder 'log'). The WERs for the test datasets are File description ================ * [log](URL), this directory contains the decoding log and decoding results * [test\_wavs](URL), this directory contains wave files for testing the pre-trained model * [data](URL), this directory contains files generated by <URL> * [exp](URL), this directory contains only one file: 'URL' 'exp/URL' is generated by the following command: HINT: To use 'URL' to compute the WER for test-clean and test-other, just do the following: and pass '--epoch 999 --avg 1' to 'transducer\_stateless/URL'.
[]
[ "TAGS\n#region-us \n" ]
null
k2
# Introduction This repo contains pre-trained model using <https://github.com/k2-fsa/icefall/pull/213>. It is trained on full LibriSpeech dataset. Also, it uses the `L` subset from [GigaSpeech](https://github.com/SpeechColab/GigaSpeech) as extra training data. ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01 cd icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `2332ba312d7ce72f08c7bac1e3312f7e3dd722dc`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout 2332ba312d7ce72f08c7bac1e3312f7e3dd722dc ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/2332ba312d7ce72f08c7bac1e3312f7e3dd722dc/egs/librispeech/ASR/transducer_stateless_multi_datasets/train.py#L218> In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from [Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419). A Conv1d layer is placed right after the input embedding layer. ----- ## Description This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: ``` cd egs/librispeech/ASR/ ./prepare.sh ./prepare_giga_speech.sh export CUDA_VISIBLE_DEVICES="0,1,2,3" ./transducer_stateless_multi_datasets/train.py \ --world-size 4 \ --num-epochs 40 \ --start-epoch 0 \ --exp-dir transducer_stateless_multi_datasets/exp-full-2 \ --full-libri 1 \ --max-duration 300 \ --lr-factor 5 \ --bpe-model data/lang_bpe_500/bpe.model \ --modified-transducer-prob 0.25 \ --giga-prob 0.2 ``` The tensorboard training log can be found at <https://tensorboard.dev/experiment/xmo5oCgrRVelH9dCeOkYBg/> The command for decoding is: ```bash epoch=39 avg=15 sym=1 # greedy search ./transducer_stateless_multi_datasets/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless_multi_datasets/exp-full-2 \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 \ --context-size 2 \ --max-sym-per-frame $sym # modified beam search ./transducer_stateless_multi_datasets/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless_multi_datasets/exp-full-2 \ --bpe-model ./data/lang_bpe_500/bpe.model \ --max-duration 100 \ --context-size 2 \ --decoding-method modified_beam_search \ --beam-size 4 ``` You can find the decoding log for the above command in this repo (in the folder `log`). 
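With `--giga-prob 0.2`, roughly 20% of the training batches come from the GigaSpeech `L` subset and the rest from LibriSpeech. Conceptually the selection is a biased coin flip between two data sources; the sketch below illustrates that with generic iterables rather than the lhotse-based samplers icefall actually uses.

```python
import random


def mixed_batches(libri_loader, giga_loader, giga_prob=0.2, seed=0):
    """Yield batches, taking each one from `giga_loader` with probability
    `giga_prob` and from `libri_loader` otherwise."""
    rng = random.Random(seed)
    libri, giga = iter(libri_loader), iter(giga_loader)
    while True:
        source = giga if rng.random() < giga_prob else libri
        try:
            yield next(source)
        except StopIteration:
            break  # stop once the chosen source is exhausted


# Example with toy "datasets":
libri = [f"libri-batch-{i}" for i in range(8)]
giga = [f"giga-batch-{i}" for i in range(8)]
for batch in mixed_batches(libri, giga):
    print(batch)
```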
The WERs for the test datasets are

|                                     | test-clean | test-other | comment                                  |
|-------------------------------------|------------|------------|------------------------------------------|
| greedy search (max sym per frame 1) | 2.64       | 6.55       | --epoch 39, --avg 15, --max-duration 100 |
| modified beam search (beam size 4)  | 2.61       | 6.46       | --epoch 39, --avg 15, --max-duration 100 |


# File description

- [log][log], this directory contains the decoding log and decoding results
- [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model
- [data][data], this directory contains files generated by [prepare.sh][prepare]
- [exp][exp], this directory contains only one file: `pretrained.pt`

`exp/pretrained.pt` is generated by the following command:

```bash
./transducer_stateless_multi_datasets/export.py \
  --epoch 39 \
  --avg 15 \
  --bpe-model data/lang_bpe_500/bpe.model \
  --exp-dir transducer_stateless_multi_datasets/exp-full-2
```

**HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other,
just do the following:

```
cp icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/exp/pretrained.pt \
  /path/to/icefall/egs/librispeech/ASR/transducer_stateless_multi_datasets/exp/epoch-999.pt
```

and pass `--epoch 999 --avg 1` to `transducer_stateless_multi_datasets/decode.py`.

[icefall]: https://github.com/k2-fsa/icefall
[prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh
[exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/exp
[data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/data
[test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/test_wavs
[log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01/tree/main/log
[icefall]: https://github.com/k2-fsa/icefall
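As an extra sanity check before decoding (this is not part of the icefall recipe, just a hypothetical helper), you can confirm that the files in `test_wavs` are mono recordings sampled at 16 kHz, which is what LibriSpeech models expect:

```python
# Hypothetical helper, not part of the icefall recipe: verify the test waves
# are mono and sampled at 16 kHz before feeding them to pretrained.pt.
import glob

import torchaudio

for path in sorted(glob.glob("test_wavs/*.wav")):
    waveform, sample_rate = torchaudio.load(path)
    print(path, tuple(waveform.shape), sample_rate)
    assert waveform.shape[0] == 1, f"{path} is not mono"
    assert sample_rate == 16000, f"{path} is sampled at {sample_rate} Hz"
```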
{"language": "en", "license": "apache-2.0", "tags": ["icefall", "k2", "transducer", "librispeech", "ASR", "stateless transducer", "PyTorch", "RNN-T", "speech recognition"], "datasets": ["librispeech"], "metrics": ["WER"]}
csukuangfj/icefall-asr-librispeech-transducer-stateless-multi-datasets-bpe-500-2022-03-01
null
[ "k2", "icefall", "transducer", "librispeech", "ASR", "stateless transducer", "PyTorch", "RNN-T", "speech recognition", "en", "dataset:librispeech", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #k2 #icefall #transducer #librispeech #ASR #stateless transducer #PyTorch #RNN-T #speech recognition #en #dataset-librispeech #license-apache-2.0 #region-us
Introduction ============ This repo contains pre-trained model using <URL It is trained on full LibriSpeech dataset. Also, it uses the 'L' subset from GigaSpeech as extra training data. How to clone this repo ---------------------- Catuion: You have to run 'git lfs pull'. Otherwise, you will be SAD later. The model in this repo is trained using the commit '2332ba312d7ce72f08c7bac1e3312f7e3dd722dc'. You can use to download 'icefall'. You can find the model information by visiting <URL In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 1024-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from Rnn-Transducer with Stateless Prediction Network. A Conv1d layer is placed right after the input embedding layer. --- Description ----------- This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall](URL). There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: The tensorboard training log can be found at <URL The command for decoding is: You can find the decoding log for the above command in this repo (in the folder 'log'). The WERs for the test datasets are File description ================ * [log](URL), this directory contains the decoding log and decoding results * [test\_wavs](URL), this directory contains wave files for testing the pre-trained model * [data](URL), this directory contains files generated by <URL> * [exp](URL), this directory contains only one file: 'URL' 'exp/URL' is generated by the following command: HINT: To use 'URL' to compute the WER for test-clean and test-other, just do the following: and pass '--epoch 999 --avg 1' to 'transducer\_stateless\_multi\_datasets/URL'.
[]
[ "TAGS\n#k2 #icefall #transducer #librispeech #ASR #stateless transducer #PyTorch #RNN-T #speech recognition #en #dataset-librispeech #license-apache-2.0 #region-us \n" ]
null
null
## Pre-trained TDNN models for the yesno dataset with icefall. Refer to <https://github.com/k2-fsa/icefall/tree/master/egs/yesno/ASR> for more information about this pre-trained model. You can find usage instructions there. ## Sound files for testing the pre-trained model The folder `test_waves` contains test sound files. They are downloaded from <https://www.openslr.org/1/>. There are 60 files in the dataset, 30 are used for training. The remaining 30 files, contained in `test_waves` are kept for testing. The code for splitting the dataset can be found at <https://github.com/lhotse-speech/lhotse/blob/master/lhotse/recipes/yesno.py#L138> ```python wave_files = list(corpus_dir.glob("*.wav")) assert len(wave_files) == 60 wave_files.sort() train_set = wave_files[::2] test_set = wave_files[1::2] assert len(train_set) == 30 assert len(test_set) == 30 ```
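A small sanity check one could append to the snippet above (it reuses `wave_files`, `train_set`, and `test_set` from that snippet and is not part of the original recipe):

```python
# Hypothetical checks, continuing the snippet above: the two interleaved slices
# must not overlap and together must cover all 60 files.
assert set(train_set).isdisjoint(set(test_set))
assert sorted(train_set + test_set) == sorted(wave_files)
```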
{}
csukuangfj/icefall_asr_yesno_tdnn
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
## Pre-trained TDNN models for the yesno dataset with icefall. Refer to <URL for more information about this pre-trained model. You can find usage instructions there. ## Sound files for testing the pre-trained model The folder 'test_waves' contains test sound files. They are downloaded from <URL There are 60 files in the dataset, 30 are used for training. The remaining 30 files, contained in 'test_waves' are kept for testing. The code for splitting the dataset can be found at <URL
[ "## Pre-trained TDNN models for the yesno dataset with icefall.\n\nRefer to <URL\nfor more information about this pre-trained model.\n\nYou can find usage instructions there.", "## Sound files for testing the pre-trained model\n\nThe folder 'test_waves' contains test sound files. They\nare downloaded from <URL\n\nThere are 60 files in the dataset, 30 are used for training.\nThe remaining 30 files, contained in 'test_waves' are kept for testing.\n\nThe code for splitting the dataset can be found at\n<URL" ]
[ "TAGS\n#region-us \n", "## Pre-trained TDNN models for the yesno dataset with icefall.\n\nRefer to <URL\nfor more information about this pre-trained model.\n\nYou can find usage instructions there.", "## Sound files for testing the pre-trained model\n\nThe folder 'test_waves' contains test sound files. They\nare downloaded from <URL\n\nThere are 60 files in the dataset, 30 are used for training.\nThe remaining 30 files, contained in 'test_waves' are kept for testing.\n\nThe code for splitting the dataset can be found at\n<URL" ]
null
null
See https://colab.research.google.com/drive/14MozS-9jWD3XQ0o-dZ-meqnblgHs70P2?usp=sharing
{}
csukuangfj/test-data-for-optimized-transducer
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
See URL
[]
[ "TAGS\n#region-us \n" ]
null
null
# Introduction

This repo contains the benchmark results for <https://github.com/csukuangfj/transducer-loss-benchmarking>

## Usage

First, install `git-lfs`.

Second, use the following command to clone this repo:

```bash
git lfs install
git clone https://huggingface.co/csukuangfj/transducer-loss-benchmarking
```

**Caution**: You have to run `git lfs install` first. Otherwise, you will be **SAD** later.

Third,

```
pip install torch-tb-profiler
cd transducer-loss-benchmarking
tensorboard --logdir ./log/torchaudio-30 --port 6006
tensorboard --logdir ./log/optimized_transducer-30 --port 6007
```

Fourth, open your browser and go to

- <http://localhost:6006/#pytorch_profiler>
- <http://localhost:6007/#pytorch_profiler>

You will see the following images:

![](./torchaudio-30.png)

![](./optimized_transducer-30.png)
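For context, the two profiles compare transducer (RNN-T) loss implementations. A single call to torchaudio's implementation looks roughly like the sketch below — the shapes are invented for illustration and it assumes torchaudio >= 0.10, so treat it as a hypothetical example rather than the exact benchmarking code:

```python
import torch
import torchaudio

# Invented sizes, purely for illustration: batch, frames, target length, vocab.
B, T, U, C = 2, 50, 10, 500

logits = torch.randn(B, T, U + 1, C, requires_grad=True)
targets = torch.randint(1, C, (B, U), dtype=torch.int32)
logit_lengths = torch.full((B,), T, dtype=torch.int32)
target_lengths = torch.full((B,), U, dtype=torch.int32)

loss = torchaudio.functional.rnnt_loss(
    logits, targets, logit_lengths, target_lengths, blank=0
)
loss.backward()
print(loss.item())
```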
{}
csukuangfj/transducer-loss-benchmarking
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
# Introduction

This repo contains the benchmark results for <URL

## Usage

First, install 'git-lfs'.

Second, use the following command to clone this repo:



Caution: You have to run 'git lfs install' first. Otherwise, you will be SAD later.

Third,


Fourth, open your browser and go to

- <http://localhost:6006/#pytorch_profiler>
- <http://localhost:6007/#pytorch_profiler>

You will see the following images:

![](./URL)

![](./optimized_transducer-URL)
[ "# Introduction\n\nThis repo contains the benchmark results for <URL", "## Usage\n\nFirst, install 'git-lfs'.\n\nSecond, use the following command to clone this repo:\n\n\n\nCaution: You have to run 'git lfs install' first. Otherwise, you will be SAD later.\n\nThird,\n\n\nFourth, open your browser and go to\n\n- <http://localhost:6006/#pytorch_profiler>\n- <http://localhost:6006/#pytorch_profiler>\n\nYou will see the following images:\n\n![](./URL)\n\n![](./optimized_transducer-URL)" ]
[ "TAGS\n#region-us \n", "# Introduction\n\nThis repo contains the benchmark results for <URL", "## Usage\n\nFirst, install 'git-lfs'.\n\nSecond, use the following command to clone this repo:\n\n\n\nCaution: You have to run 'git lfs install' first. Otherwise, you will be SAD later.\n\nThird,\n\n\nFourth, open your browser and go to\n\n- <http://localhost:6006/#pytorch_profiler>\n- <http://localhost:6006/#pytorch_profiler>\n\nYou will see the following images:\n\n![](./URL)\n\n![](./optimized_transducer-URL)" ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Cantonese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Cantonese using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "zh-HK", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ctl/wav2vec2-large-xlsr-cantonese") model = Wav2Vec2ForCTC.from_pretrained("ctl/wav2vec2-large-xlsr-cantonese") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Chinese (Hong Kong) test data of Common Voice. ```python !pip install jiwer import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re import argparse lang_id = "zh-HK" model_id = "ctl/wav2vec2-large-xlsr-cantonese" chars_to_ignore_regex = '[\,\?\.\!\-\;\:"\“\%\‘\”\�\.\⋯\!\-\:\–\。\》\,\)\,\?\;\~\~\…\︰\,\(\」\‧\《\﹔\、\—\/\,\「\﹖\·\']' test_dataset = load_dataset("common_voice", f"{lang_id}", split="test") cer = load_metric("cer") processor = Wav2Vec2Processor.from_pretrained(f"{model_id}") model = Wav2Vec2ForCTC.from_pretrained(f"{model_id}") model.to("cuda") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=16) print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 15.51 % ## Training The Common Voice `train`, `validation` were used for training. The script used for training will be posted [here](https://github.com/chutaklee/CantoASR)
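One practical note on the usage examples above: the model expects 16 kHz input. If your recordings are at a different rate, a small helper along these lines (an illustration, not taken from the card) can normalize them before calling the processor:

```python
import torchaudio

def load_speech_16k(path):
    """Load an audio file and resample it to the 16 kHz the model expects."""
    speech, sample_rate = torchaudio.load(path)
    if sample_rate != 16_000:
        speech = torchaudio.transforms.Resample(sample_rate, 16_000)(speech)
    return speech.squeeze().numpy()

# Hypothetical usage with the processor from the examples above:
# inputs = processor(load_speech_16k("my_audio.wav"), sampling_rate=16_000, return_tensors="pt", padding=True)
```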
{"language": ["yue"], "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["cer"], "language_bcp47": ["zh-HK"], "model-index": [{"name": "wav2vec2-large-xlsr-cantonese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice zh-HK", "type": "common_voice", "args": "zh-HK"}, "metrics": [{"type": "cer", "value": 15.36, "name": "Test CER"}]}]}]}
ctl/wav2vec2-large-xlsr-cantonese
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "yue", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "yue" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #yue #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
# Wav2Vec2-Large-XLSR-53-Cantonese Fine-tuned facebook/wav2vec2-large-xlsr-53 on Cantonese using the Common Voice. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Chinese (Hong Kong) test data of Common Voice. Test Result: 15.51 % ## Training The Common Voice 'train', 'validation' were used for training. The script used for training will be posted here
[ "# Wav2Vec2-Large-XLSR-53-Cantonese\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Cantonese using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Chinese (Hong Kong) test data of Common Voice. \n\n\n\n\n\nTest Result: 15.51 %", "## Training\n\nThe Common Voice 'train', 'validation' were used for training.\n\nThe script used for training will be posted here" ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #yue #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "# Wav2Vec2-Large-XLSR-53-Cantonese\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Cantonese using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Chinese (Hong Kong) test data of Common Voice. \n\n\n\n\n\nTest Result: 15.51 %", "## Training\n\nThe Common Voice 'train', 'validation' were used for training.\n\nThe script used for training will be posted here" ]
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
cumtowndiscord/DialoGPT-small-joshua
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# My Awesome Model
[ "# My Awesome Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# My Awesome Model" ]
token-classification
transformers
Fine-tuning the LayoutLMv2 model on a Vietnamese bill dataset.

```python
from transformers import LayoutLMv2ForTokenClassification

# Entity labels used for token classification on the bill dataset
labels = ['price', 'storename', 'total_cost', 'phone', 'address', 'unitprice', 'item', 'subitem', 'other', 'time', 'unit', 'total refunds', 'total_qty', 'seller', 'total_received']

model = LayoutLMv2ForTokenClassification.from_pretrained('cuongngm/layoutlm-bill', num_labels=len(labels))
```
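For inference on a scanned bill, the checkpoint can be paired with a LayoutLMv2 processor. The sketch below is only an assumed illustration continuing from the snippet above: the card does not say which processor was used, so it assumes the base `microsoft/layoutlmv2-base-uncased` processor with its default Tesseract OCR (which requires `detectron2` and `pytesseract` to be installed), and the file name is hypothetical.

```python
from PIL import Image
from transformers import LayoutLMv2Processor

# Assumed processor; the card does not state which one was used during training.
processor = LayoutLMv2Processor.from_pretrained('microsoft/layoutlmv2-base-uncased')

image = Image.open('bill.jpg').convert('RGB')     # hypothetical input image
encoding = processor(image, return_tensors='pt')  # runs OCR and builds word boxes
outputs = model(**encoding)                       # `model` from the snippet above

# Token-level predictions; they still need to be aligned back to OCR words.
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([labels[p] for p in predictions])
```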
{}
cuongngm/layoutlm-bill
null
[ "transformers", "pytorch", "layoutlmv2", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #layoutlmv2 #token-classification #autotrain_compatible #endpoints_compatible #region-us
Fine tuning LayoutLMv2 model on Vietnamese bill dataset labels = ['price', 'storename', 'total_cost', 'phone', 'address', 'unitprice', 'item', 'subitem', 'other', 'time', 'unit', 'total refunds', 'total_qty', 'seller', 'total_received']
[]
[ "TAGS\n#transformers #pytorch #layoutlmv2 #token-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
cutiebunny639/DialoGPT-small-harry
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text-classification
transformers
**Disclaimer**: *This model is still under testing and may change in the future, we will try to keep backwards compatibility. For any questions reach us at [email protected]* # MediaWatch News Topics (Greek) Fine-tuned model for multi-label text-classification (SequenceClassification), based on [roberta-el-news](https://huggingface.co/cvcio/roberta-el-news), using [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. This model is to classify news in real-time on upto 33 topics including: *AFFAIRS*, *AGRICULTURE*, *ARTS_AND_CULTURE*, *BREAKING_NEWS*, *BUSINESS*, *COVID*, *ECONOMY*, *EDUCATION*, *ELECTIONS*, *ENTERTAINMENT*, *ENVIRONMENT*, *FOOD*, *HEALTH*, *INTERNATIONAL*, *LAW_AND_ORDER*, *MILITARY*, *NON_PAPER*, *OPINION*, *POLITICS*, *REFUGEE*, *REGIONAL*, *RELIGION*, *SCIENCE*, *SOCIAL_MEDIA*, *SOCIETY*, *SPORTS*, *TECH*, *TOURISM*, *TRANSPORT*, *TRAVEL*, *WEATHER*, *CRIME*, *JUSTICE*. ## How to use You can use this model directly with a pipeline for text-classification: ```python from transformers import pipeline pipe = pipeline( task="text-classification", model="cvcio/mediawatch-el-topics", tokenizer="cvcio/roberta-el-news" # or cvcio/mediawatch-el-topics ) topics = pipe( "Η βιασύνη αρκετών χωρών να άρουν τους περιορισμούς κατά του κορονοϊού, "+ "αν όχι να κηρύξουν το τέλος της πανδημίας, με το σκεπτικό ότι έφτασε "+ "πλέον η ώρα να συμβιώσουμε με την Covid-19, έχει κάνει μερικούς πιο "+ "επιφυλακτικούς επιστήμονες να προειδοποιούν ότι πρόκειται μάλλον "+ "για «ενδημική αυταπάτη» και ότι είναι πρόωρη τέτοια υπερβολική "+ "χαλάρωση. Καθώς τα κρούσματα της Covid-19, μετά το αιφνιδιαστικό "+ "μαζικό κύμα της παραλλαγής Όμικρον, εμφανίζουν τάση υποχώρησης σε "+ "Ευρώπη και Βόρεια Αμερική, όπου περισσεύει η κόπωση μεταξύ των "+ "πολιτών μετά από δύο χρόνια πανδημίας, ειδικοί και μη αδημονούν να "+ "«ξεμπερδέψουν» με τον κορονοϊό.", padding=True, truncation=True, max_length=512, return_all_scores=True ) print(topics) # outputs [ [ {'label': 'AFFAIRS', 'score': 0.0018806682201102376}, {'label': 'AGRICULTURE', 'score': 0.00014653144171461463}, {'label': 'ARTS_AND_CULTURE', 'score': 0.0012948638759553432}, {'label': 'BREAKING_NEWS', 'score': 0.0001729220530251041}, {'label': 'BUSINESS', 'score': 0.0028276608791202307}, {'label': 'COVID', 'score': 0.4407998025417328}, {'label': 'ECONOMY', 'score': 0.039826102554798126}, {'label': 'EDUCATION', 'score': 0.0019098613411188126}, {'label': 'ELECTIONS', 'score': 0.0003333651984576136}, {'label': 'ENTERTAINMENT', 'score': 0.004249618388712406}, {'label': 'ENVIRONMENT', 'score': 0.0015828514005988836}, {'label': 'FOOD', 'score': 0.0018390495097264647}, {'label': 'HEALTH', 'score': 0.1204477995634079}, {'label': 'INTERNATIONAL', 'score': 0.25892165303230286}, {'label': 'LAW_AND_ORDER', 'score': 0.07646272331476212}, {'label': 'MILITARY', 'score': 0.00033025629818439484}, {'label': 'NON_PAPER', 'score': 0.011991199105978012}, {'label': 'OPINION', 'score': 0.16166265308856964}, {'label': 'POLITICS', 'score': 0.0008890336030162871}, {'label': 'REFUGEE', 'score': 0.0011504743015393615}, {'label': 'REGIONAL', 'score': 0.0008734092116355896}, {'label': 'RELIGION', 'score': 0.0009001944563351572}, {'label': 'SCIENCE', 'score': 0.05075162276625633}, {'label': 'SOCIAL_MEDIA', 'score': 0.00039615994319319725}, {'label': 'SOCIETY', 'score': 0.0043518817983567715}, {'label': 'SPORTS', 'score': 0.002416545059531927}, {'label': 'TECH', 'score': 0.0007818648009561002}, {'label': 'TOURISM', 'score': 
0.011870541609823704}, {'label': 'TRANSPORT', 'score': 0.0009422845905646682}, {'label': 'TRAVEL', 'score': 0.03004464879631996}, {'label': 'WEATHER', 'score': 0.00040286066359840333}, {'label': 'CRIME', 'score': 0.0005416403291746974}, {'label': 'JUSTICE', 'score': 0.000990519649349153} ] ] ``` ## Labels All labels, except *NON_PAPER*, retrieved by source articles during the data collection step, without any preprocessing, assuming that journalists and newsrooms assign correct tags to the articles. We disregarded all articles with more than 6 tags to reduce bias and tag manipulation. | label | roc_auc | samples | |-------:|--------:|--------:| | AFFAIRS | 0.9872 | 6,314 | | AGRICULTURE | 0.9799 | 1,254 | | ARTS_AND_CULTURE | 0.9838 | 15,968 | | BREAKING_NEWS | 0.9675 | 827 | | BUSINESS | 0.9811 | 6,507 | | COVID | 0.9620 | 50,000 | | CRIME | 0.9885 | 34,421 | | ECONOMY | 0.9765 | 45,474 | | EDUCATION | 0.9865 | 10,111 | | ELECTIONS | 0.9940 | 7,571 | | ENTERTAINMENT | 0.9925 | 23,323 | | ENVIRONMENT | 0.9847 | 23,060 | | FOOD | 0.9934 | 3,712 | | HEALTH | 0.9723 | 16,852 | | INTERNATIONAL | 0.9624 | 50,000 | | JUSTICE | 0.9862 | 4,860 | | LAW_AND_ORDER | 0.9177 | 50,000 | | MILITARY | 0.9838 | 6,536 | | NON_PAPER | 0.9595 | 4,589 | | OPINION | 0.9624 | 6,296 | | POLITICS | 0.9773 | 50,000 | | REFUGEE | 0.9949 | 4,536 | | REGIONAL | 0.9520 | 50,000 | | RELIGION | 0.9922 | 11,533 | | SCIENCE | 0.9837 | 1,998 | | SOCIAL_MEDIA | 0.991 | 6,212 | | SOCIETY | 0.9439 | 50,000 | | SPORTS | 0.9939 | 31,396 | | TECH | 0.9923 | 8,225 | | TOURISM | 0.9900 | 8,081 | | TRANSPORT | 0.9879 | 3,211 | | TRAVEL | 0.9832 | 4,638 | | WEATHER | 0.9950 | 19,931 | | loss | 0.0533 | - | | roc_auc | 0.9855 | - | ## Pretraining The model was pretrained using an NVIDIA A10 GPU for 15 epochs (~ approx 59K steps, 8 hours training) with a batch size of 128. The optimizer used is Adam with a learning rate of 1e-5, and weight decay 0.01. We used roc_auc_micro to evaluate the results. ### Framework versions - Transformers 4.13.0 - Pytorch 1.9.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3 ## Authors Dimitris Papaevagelou - [@andefined](https://github.com/andefined) ## About Us [Civic Information Office](https://cvcio.org/) is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest.
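Returning to the usage example above: since the pipeline returns a score for every one of the 33 topics, a typical next step is to keep only the labels above a threshold. The fragment below is a hypothetical post-processing step (the threshold value is an assumption to be tuned on your own data), reusing the `topics` output from that example:

```python
# Hypothetical post-processing of the `topics` output shown above.
THRESHOLD = 0.2  # assumed value; tune it on your own validation data

predicted_labels = [
    (t["label"], round(t["score"], 3))
    for t in topics[0]
    if t["score"] >= THRESHOLD
]
print(predicted_labels)  # with the example text, COVID and INTERNATIONAL pass 0.2
```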
{"language": "el", "license": "gpl-3.0", "tags": ["roberta", "Greek", "news", "transformers", "text-classification"], "pipeline_tag": "text-classification", "widget": [{"text": "\u03a0\u03b1\u03c1\u2019 \u03bf\u03bb\u03af\u03b3\u03bf\u03bd \u00ab\u03b8\u03b5\u03c1\u03bc\u03cc\u00bb \u03b5\u03c0\u03b5\u03b9\u03c3\u03cc\u03b4\u03b9\u03bf \u03c4\u03bf\u03c5\u03c1\u03ba\u03b9\u03ba\u03bf\u03cd \u03c0\u03bf\u03bb\u03b5\u03bc\u03b9\u03ba\u03bf\u03cd \u03c0\u03bb\u03bf\u03af\u03bf\u03c5 \u03bc\u03b5 \u03b5\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03cc \u03c9\u03ba\u03b5\u03b1\u03bd\u03bf\u03b3\u03c1\u03b1\u03c6\u03b9\u03ba\u03cc \u03c3\u03c4\u03b7\u03bd \u03c0\u03b5\u03c1\u03b9\u03bf\u03c7\u03ae \u03bc\u03b5\u03c4\u03b1\u03be\u03cd \u03a1\u03cc\u03b4\u03bf\u03c5 \u03ba\u03b1\u03b9 \u039a\u03b1\u03c3\u03c4\u03b5\u03bb\u03cc\u03c1\u03b9\u03b6\u03bf\u03c5, \u03c3\u03c4\u03bf \u03b4\u03b9\u03ac\u03c3\u03c4\u03b7\u03bc\u03b1 20-23 \u03a3\u03b5\u03c0\u03c4\u03b5\u03bc\u03b2\u03c1\u03af\u03bf\u03c5, \u03b1\u03c0\u03bf\u03ba\u03ac\u03bb\u03c5\u03c8\u03b5 \u03c4\u03bf \u039f\u03a1\u0395\u039d. \u03a3\u03cd\u03bc\u03c6\u03c9\u03bd\u03b1 \u03bc\u03b5 \u03c0\u03bb\u03b7\u03c1\u03bf\u03c6\u03bf\u03c1\u03af\u03b5\u03c2 \u03c0\u03bf\u03c5 \u03bc\u03b5\u03c4\u03ad\u03b4\u03c9\u03c3\u03b5 \u03c4\u03bf \u03ba\u03b5\u03bd\u03c4\u03c1\u03b9\u03ba\u03cc \u03b4\u03b5\u03bb\u03c4\u03af\u03bf \u03b5\u03b9\u03b4\u03ae\u03c3\u03b5\u03c9\u03bd, \u03cc\u03c4\u03b1\u03bd \u03c4\u03bf \u03b5\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03cc \u03b5\u03c1\u03b5\u03c5\u03bd\u03b7\u03c4\u03b9\u03ba\u03cc \u00ab \u0391\u0399\u0393\u0391\u0399\u039f \u00bb \u03c0\u03bf\u03c5 \u03b1\u03bd\u03ae\u03ba\u03b5\u03b9 \u03c3\u03c4\u03bf \u0395\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03cc \u039a\u03ad\u03bd\u03c4\u03c1\u03bf \u0398\u03b1\u03bb\u03b1\u03c3\u03c3\u03af\u03c9\u03bd \u0395\u03c1\u03b5\u03c5\u03bd\u03ce\u03bd \u03b2\u03b3\u03ae\u03ba\u03b5 \u03ad\u03be\u03c9 \u03b1\u03c0\u03cc \u03c4\u03b1 6 \u03bd.\u03bc, \u03c3\u03b5 \u03b4\u03b9\u03b5\u03b8\u03bd\u03ae \u03cd\u03b4\u03b1\u03c4\u03b1, \u03c4\u03bf \u03c0\u03c1\u03bf\u03c3\u03ad\u03b3\u03b3\u03b9\u03c3\u03b5 \u03c4\u03bf\u03c5\u03c1\u03ba\u03b9\u03ba\u03cc \u03c0\u03bf\u03bb\u03b5\u03bc\u03b9\u03ba\u03cc \u03c0\u03bb\u03bf\u03af\u03bf, \u03bf \u03ba\u03c5\u03b2\u03b5\u03c1\u03bd\u03ae\u03c4\u03b7\u03c2 \u03c4\u03bf\u03c5 \u03bf\u03c0\u03bf\u03af\u03bf\u03c5 \u03b6\u03ae\u03c4\u03b7\u03c3\u03b5 \u03b4\u03cd\u03bf \u03c6\u03bf\u03c1\u03ad\u03c2 \u03bc\u03ad\u03c3\u03c9 \u03b1\u03c3\u03c5\u03c1\u03bc\u03ac\u03c4\u03bf\u03c5 \u03bd\u03b1 \u03b5\u03bd\u03b7\u03bc\u03b5\u03c1\u03c9\u03b8\u03b5\u03af \u03b3\u03b9\u03b1 \u03c4\u03b1 \u03c3\u03c4\u03bf\u03b9\u03c7\u03b5\u03af\u03b1 \u03c4\u03bf\u03c5 \u03c0\u03bb\u03bf\u03af\u03bf\u03c5, \u03b1\u03bb\u03bb\u03ac \u03ba\u03b1\u03b9 \u03b3\u03b9\u03b1 \u03c4\u03b7\u03bd \u03b1\u03c0\u03bf\u03c3\u03c4\u03bf\u03bb\u03ae \u03c4\u03bf\u03c5. 
\u039f \u03c0\u03bb\u03bf\u03af\u03b1\u03c1\u03c7\u03bf\u03c2 \u03c4\u03bf\u03c5 \u03b5\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03bf\u03cd \u03b5\u03c1\u03b5\u03c5\u03bd\u03b7\u03c4\u03b9\u03ba\u03bf\u03cd \u03b4\u03b5\u03bd \u03b1\u03c0\u03ac\u03bd\u03c4\u03b7\u03c3\u03b5 \u03ba\u03b1\u03b9 \u03c4\u03b5\u03bb\u03b9\u03ba\u03ac \u03c4\u03bf \u03c4\u03bf\u03c5\u03c1\u03ba\u03b9\u03ba\u03cc \u03c0\u03bf\u03bb\u03b5\u03bc\u03b9\u03ba\u03cc \u03b1\u03c0\u03bf\u03bc\u03b1\u03ba\u03c1\u03cd\u03bd\u03b8\u03b7\u03ba\u03b5.", "example_title": "Topic AFFAIRS"}, {"text": "\u0397 \u03ba\u03c5\u03b2\u03b5\u03c1\u03bd\u03b7\u03c4\u03b9\u03ba\u03ae \u03b1\u03bd\u03b9\u03ba\u03b1\u03bd\u03cc\u03c4\u03b7\u03c4\u03b1 \u03bf\u03b4\u03b7\u03b3\u03b5\u03af \u03c4\u03b7\u03bd \u03c7\u03ce\u03c1\u03b1 \u03c3\u03c4\u03bf \u03c7\u03ac\u03bf\u03c2. \u0397 \u03ba\u03c5\u03b2\u03b5\u03c1\u03bd\u03b7\u03c3\u03b7 \u039c\u03b7\u03c4\u03c3\u03bf\u03c4\u03b1\u03ba\u03b7 \u03b1\u03b4\u03c5\u03bd\u03b1\u03c4\u03b5\u03af \u03bd\u03b1 \u03b4\u03b9\u03b1\u03c7\u03b5\u03b9\u03c1\u03b9\u03c3\u03c4\u03b5\u03af \u03c4\u03b7\u03bd \u03c0\u03b1\u03bd\u03b4\u03b7\u03bc\u03af\u03b1. \u0394\u03b5\u03bd \u03bc\u03c0\u03bf\u03c1\u03b5\u03b9 \u03bf\u03cd\u03c4\u03b5 \u03bd\u03b1 \u03c0\u03b5\u03af\u03c3\u03b5\u03b9 \u03c4\u03bf\u03bd \u03ba\u03cc\u03c3\u03bc\u03bf \u03bd\u03b1 \u03b5\u03bc\u03b2\u03bf\u03bb\u03b9\u03b1\u03c3\u03c4\u03b5\u03af, \u03c0\u03bf\u03c5 \u03ae\u03c4\u03b1\u03bd \u03c4\u03bf \u03c0\u03b9\u03bf \u03b1\u03c0\u03bb\u03bf \u03c0\u03c1\u03ac\u03b3\u03bc\u03b1. \u03a3\u03b7\u03bc\u03b5\u03c1\u03b1 \u03bb\u03bf\u03b9\u03c0\u03cc\u03bd \u03c6\u03c4\u03ac\u03c3\u03b1\u03bc\u03b5 \u03c3\u03c4\u03bf \u03c3\u03b7\u03bc\u03b5\u03af\u03bf \u03bd\u03b1 \u03bc\u03b9\u03bb\u03ac\u03bc\u03b5 \u03b3\u03b9\u03b1 \u03b5\u03c0\u03b1\u03bd\u03b1\u03c6\u03bf\u03c1\u03ac \u03c4\u03b7\u03c2 \u03c7\u03c1\u03ae\u03c3\u03b7\u03c2 \u03bc\u03ac\u03c3\u03ba\u03b1\u03c2 \u03c3\u03b5 \u03b5\u03be\u03c9\u03c4\u03b5\u03c1\u03b9\u03ba\u03bf\u03cd\u03c2 \u03c7\u03ce\u03c1\u03bf\u03c5\u03c2 \u03b1\u03ba\u03cc\u03bc\u03b7 \u03ba\u03b1\u03b9 \u03cc\u03c0\u03bf\u03c5 \u03b4\u03b5\u03bd \u03c5\u03c0\u03ac\u03c1\u03c7\u03b5\u03b9 \u03c3\u03c5\u03b3\u03c7\u03c1\u03c9\u03c4\u03b9\u03c3\u03bc\u03cc\u03c2. 
\u03a3\u03c4\u03b9\u03c2 \u03c3\u03c5\u03b6\u03b7\u03c4\u03ae\u03c3\u03b5\u03b9\u03c2 \u03c4\u03c9\u03bd \u03b5\u03b9\u03b4\u03b9\u03ba\u03ce\u03bd \u03b8\u03b1 \u03b2\u03c1\u03b5\u03b8\u03b5\u03af \u03b5\u03c0\u03af\u03c3\u03b7\u03c2 \u03c4\u03bf \u03b5\u03bd\u03b4\u03b5\u03c7\u03cc\u03bc\u03b5\u03bd\u03bf \u03b3\u03b9\u03b1 \u03c4\u03bf\u03c0\u03b9\u03ba\u03ac lockdown \u03c3\u03b5 \u03c0\u03b5\u03c1\u03b9\u03bf\u03c7\u03ad\u03c2 \u03bc\u03b5 \u03b2\u03b1\u03c1\u03cd \u03b9\u03b9\u03ba\u03cc \u03c6\u03bf\u03c1\u03c4\u03af\u03bf \u03b3\u03b9\u03b1 \u03bd\u03b1 \u03bc\u03b7\u03bd \u03be\u03b5\u03c6\u03cd\u03b3\u03b5\u03b9 \u03b7 \u03ba\u03b1\u03c4\u03ac\u03c3\u03c4\u03b1\u03c3\u03b7, \u03b5\u03bd\u03ce \u03b8\u03b1 \u03c7\u03c1\u03b5\u03b9\u03ac\u03b6\u03b5\u03c4\u03b1\u03b9 \u03ba\u03ac\u03c0\u03bf\u03b9\u03bf\u03c2 \u03b3\u03b9\u03b1 \u03c4\u03b9\u03c2 \u03bc\u03b5\u03c4\u03b1\u03ba\u03b9\u03bd\u03ae\u03c3\u03b5\u03b9\u03c2 \u03c4\u03bf\u03c5 \u03b5\u03af\u03c4\u03b5 \u03c0\u03b9\u03c3\u03c4\u03bf\u03c0\u03bf\u03b9\u03b7\u03c4\u03b9\u03ba\u03cc \u03b5\u03bc\u03b2\u03bf\u03bb\u03b9\u03b1\u03c3\u03bc\u03bf\u03cd \u03ae \u03bd\u03cc\u03c3\u03b7\u03c3\u03b7\u03c2 \u03ba\u03b1\u03b9 \u03bf\u03b9 \u03b1\u03bd\u03b5\u03bc\u03b2\u03bf\u03bb\u03af\u03b1\u03c3\u03c4\u03bf\u03b9 rapid \u03ae \u03bc\u03bf\u03c1\u03b9\u03b1\u03ba\u03cc \u03c4\u03b5\u03c3\u03c4.", "example_title": "Topic COVID"}, {"text": "\u0397 \u00ab\u03c9\u03c1\u03b1\u03af\u03b1 \u0395\u03bb\u03ad\u03bd\u03b7\u00bb \u03b5\u03c0\u03ad\u03c3\u03c4\u03c1\u03b5\u03c8\u03b5 \u03c3\u03c4\u03b7\u03bd \u03c4\u03b7\u03bb\u03b5\u03cc\u03c1\u03b1\u03c3\u03b7, \u03bc\u03ad\u03c3\u03b1 \u03b1\u03c0\u03cc \u03c4\u03b7 \u03c3\u03c5\u03c7\u03bd\u03cc\u03c4\u03b7\u03c4\u03b1 \u03c4\u03bf\u03c5 MEGA \u03ba\u03b1\u03b9 \u03ac\u03c6\u03b7\u03c3\u03b5 \u03c4\u03b9\u03c2 \u03ba\u03b1\u03bb\u03cd\u03c4\u03b5\u03c1\u03b5\u03c2 \u03b5\u03bd\u03c4\u03c5\u03c0\u03ce\u03c3\u03b5\u03b9\u03c2. \u03a4\u03bf \u03c0\u03bb\u03b1\u03c4\u03cc \u03b1\u03c0\u03cc \u03c4\u03bf \u03bf\u03c0\u03bf\u03af\u03bf \u03b5\u03bc\u03c6\u03b1\u03bd\u03af\u03b6\u03b5\u03c4\u03b1\u03b9 \u03b7 \u0395\u03bb\u03ad\u03bd\u03b7 \u039c\u03b5\u03bd\u03b5\u03b3\u03ac\u03ba\u03b7 \u03ad\u03c7\u03b5\u03b9 \u03c6\u03c4\u03b9\u03b1\u03c7\u03c4\u03b5\u03af \u03b1\u03c0\u03cc \u03c4\u03b7\u03bd \u03b1\u03c1\u03c7\u03ae \u03b3\u03b9\u03b1 \u03c4\u03b7\u03bd \u03b5\u03ba\u03c0\u03bf\u03bc\u03c0\u03ae \u03c4\u03b7\u03c2. \u03a3\u03ae\u03bc\u03b5\u03c1\u03b1, \u03c3\u03c4\u03bf \u03ba\u03bb\u03b5\u03af\u03c3\u03b9\u03bc\u03bf \u03c4\u03b7\u03c2 \u03b5\u03ba\u03c0\u03bf\u03bc\u03c0\u03ae\u03c2 \u03b7 \u0395\u03bb\u03ad\u03bd\u03b7 \u03c0\u03ad\u03c1\u03b1\u03c3\u03b5 \u03b1\u03bd\u03ac\u03bc\u03b5\u03c3\u03b1 \u03b1\u03c0\u03cc \u03c4\u03b9\u03c2 \u03ba\u03ac\u03bc\u03b5\u03c1\u03b5\u03c2 \u03b3\u03b9\u03b1 \u03bd\u03b1 \u03bc\u03c0\u03b5\u03b9 \u03c3\u03c4\u03bf \u03ba\u03b1\u03bc\u03b1\u03c1\u03af\u03bd\u03b9 \u03c4\u03b7\u03c2 \u00ab\u039c\u03b7\u03bd \u03c4\u03c1\u03bf\u03bc\u03bf\u03ba\u03c1\u03b1\u03c4\u03b5\u03af\u03c3\u03c4\u03b5, \u03b5\u03af\u03bc\u03b1\u03b9 \u03b7 \u0395\u03bb\u03ad\u03bd\u03b7 \u039c\u03b5\u03bd\u03b5\u03b3\u03ac\u03ba\u03b7, \u03c4\u03b1 \u03ba\u03ac\u03bd\u03c9 \u03b1\u03c5\u03c4\u03ac. 
\u039c\u03b5 \u03c3\u03c5\u03b3\u03c7\u03c9\u03c1\u03b5\u03af\u03c4\u03b1\u03b9, \u03ad\u03c7\u03c9 \u03c8\u03c5\u03c7\u03bf\u03bb\u03bf\u03b3\u03b9\u03ba\u03ac \u03b1\u03bd \u03b4\u03b5\u03bd \u03b5\u03af\u03bc\u03b1\u03b9 \u03b5\u03bb\u03b5\u03cd\u03b8\u03b5\u03c1\u03b7\u00bb \u03b5\u03af\u03c0\u03b5 \u03b1\u03c1\u03c7\u03b9\u03ba\u03ac \u03b7 \u03c0\u03b1\u03c1\u03bf\u03c5\u03c3\u03b9\u03ac\u03c3\u03c4\u03c1\u03b9\u03b1 \u03c3\u03c4\u03bf\u03c5\u03c2 \u03c3\u03c5\u03bd\u03b5\u03c1\u03b3\u03ac\u03c4\u03b5\u03c2 \u03c4\u03b7\u03c2 \u03ba\u03b1\u03b9 \u03c0\u03c1\u03cc\u03c3\u03b8\u03b5\u03c3\u03b5 \u03c3\u03c4\u03b7 \u03c3\u03c5\u03bd\u03ad\u03c7\u03b5\u03b9\u03b1: \u00ab\u0397 \u0395\u03bb\u03ad\u03bd\u03b7 \u03bf\u03bb\u03bf\u03ba\u03bb\u03ae\u03c1\u03c9\u03c3\u03b5. \u039c\u03c0\u03bf\u03c1\u03b5\u03af\u03c4\u03b5 \u03bd\u03b1 \u03c3\u03c5\u03bd\u03b5\u03c7\u03af\u03c3\u03b5\u03c4\u03b5 \u03bc\u03b5 \u03c4\u03bf \u03c5\u03c0\u03cc\u03bb\u03bf\u03b9\u03c0\u03bf \u03c0\u03c1\u03cc\u03b3\u03c1\u03b1\u03bc\u03bc\u03b1 \u03c4\u03bf\u03c5 Mega. \u0395\u03b3\u03ce \u03b1\u03bd\u03bf\u03af\u03b3\u03c9 \u03c4\u03bf \u03ba\u03b1\u03bc\u03b1\u03c1\u03af\u03bd\u03b9, \u03b1\u03bd \u03bc\u03b5 \u03b1\u03c6\u03ae\u03c3\u03bf\u03c5\u03bd. \u039c\u03c0\u03b1\u03af\u03bd\u03c9 \u03ba\u03b1\u03bc\u03b1\u03c1\u03af\u03bd\u03b9\u00bb. \u0394\u03b5\u03af\u03c4\u03b5 \u03c4\u03bf \u03b1\u03c0\u03cc\u03c3\u03c0\u03b1\u03c3\u03bc\u03b1!", "example_title": "Topic ENTERTAINMENT"}, {"text": "\u0388\u03bd\u03b1 \u03b5\u03be\u03b1\u03b9\u03c1\u03b5\u03c4\u03b9\u03ba\u03ac \u03b5\u03bd\u03b4\u03b9\u03b1\u03c6\u03ad\u03c1\u03bf\u03bd \u00ab\u03ba\u03bf\u03c5\u03c4\u03c3\u03bf\u03bc\u03c0\u03bf\u03bb\u03b9\u03cc\u00bb \u03b5\u03bd\u03c4\u03cc\u03c0\u03b9\u03c3\u03b1\u03bd \u03bf\u03b9 \u03ba\u03b5\u03c1\u03b1\u03af\u03b5\u03c2 \u03c4\u03b7\u03c2 \u03c3\u03c4\u03ae\u03bb\u03b7\u03c2 \u03c0\u03ad\u03c1\u03b9\u03be \u03c4\u03bf\u03c5 \u039c\u03b5\u03b3\u03ac\u03c1\u03bf\u03c5 \u039c\u03b1\u03be\u03af\u03bc\u03bf\u03c5 : \u03c4\u03bf \u03ba\u03b1\u03c4\u03ac \u03c0\u03cc\u03c3\u03bf\u03bd, \u03b4\u03b7\u03bb\u03b1\u03b4\u03ae, \u03bf \u00ab\u03b5\u03be \u03b1\u03c0\u03bf\u03c1\u03c1\u03ae\u03c4\u03c9\u03bd\u00bb \u03c4\u03bf\u03c5 \u039a\u03c5\u03c1\u03b9\u03ac\u03ba\u03bf\u03c5 \u039c\u03b7\u03c4\u03c3\u03bf\u03c4\u03ac\u03ba\u03b7 , \u0393\u03b9\u03ce\u03c1\u03b3\u03bf\u03c2 \u0393\u03b5\u03c1\u03b1\u03c0\u03b5\u03c4\u03c1\u03af\u03c4\u03b7\u03c2 \u03bc\u03b5\u03c4\u03ad\u03c7\u03b5\u03b9 \u03c3\u03c4\u03b7 \u03b4\u03b9\u03b1\u03c7\u03b5\u03af\u03c1\u03b9\u03c3\u03b7 \u03c4\u03b7\u03c2 \u03c0\u03b1\u03bd\u03b4\u03b7\u03bc\u03af\u03b1\u03c2 \u03ba\u03b1\u03b9 \u03c3\u03c4\u03b7\u03bd \u03b4\u03b9\u03b1\u03b4\u03b9\u03ba\u03b1\u03c3\u03af\u03b1 \u03bb\u03ae\u03c8\u03b7\u03c2 \u03b1\u03c0\u03bf\u03c6\u03ac\u03c3\u03b5\u03c9\u03bd. 
\u03a4\u03bf \u03b5\u03bd \u03bb\u03cc\u03b3\u03c9 \u00ab\u03ba\u03bf\u03c5\u03c4\u03c3\u03bf\u03bc\u03c0\u03bf\u03bb\u03b9\u03cc\u00bb \u03c0\u03c5\u03c1\u03bf\u03b4\u03cc\u03c4\u03b7\u03c3\u03b5 \u03c4\u03bf \u03b3\u03b5\u03b3\u03bf\u03bd\u03cc\u03c2 \u03cc\u03c4\u03b9 \u03c3\u03b5 \u03c3\u03b1\u03b2\u03b2\u03b1\u03c4\u03b9\u03ac\u03c4\u03b9\u03ba\u03b7 \u03b5\u03c6\u03b7\u03bc\u03b5\u03c1\u03af\u03b4\u03b1 \u03b4\u03b7\u03bc\u03bf\u03c3\u03b9\u03b5\u03cd\u03b8\u03b7\u03ba\u03b1\u03bd \u03c0\u03c1\u03bf\u03c7\u03b8\u03ad\u03c2 \u03b4\u03b7\u03bb\u03ce\u03c3\u03b5\u03b9\u03c2 \u03c4\u03bf\u03c5 \u03c5\u03c0\u03bf\u03c5\u03c1\u03b3\u03bf\u03cd \u0395\u03c0\u03b9\u03ba\u03c1\u03b1\u03c4\u03b5\u03af\u03b1\u03c2 \u03bc\u03b5 \u03c4\u03b9\u03c2 \u03bf\u03c0\u03bf\u03af\u03b5\u03c2 \u03b1\u03c0\u03ad\u03ba\u03bb\u03b5\u03b9\u03b5 \u03ba\u03ac\u03b8\u03b5 \u03c3\u03b5\u03bd\u03ac\u03c1\u03b9\u03bf \u03bd\u03ad\u03c9\u03bd \u03bf\u03c1\u03b9\u03b6\u03cc\u03bd\u03c4\u03b9\u03c9\u03bd \u03bc\u03ad\u03c4\u03c1\u03c9\u03bd \u03ba\u03b1\u03b9 \u03c4\u03b7\u03bd \u03af\u03b4\u03b9\u03b1 \u03ce\u03c1\u03b1, \u03c4\u03bf \u039c\u03b1\u03be\u03af\u03bc\u03bf\u03c5 \u03b1\u03bd\u03ae\u03b3\u03b3\u03b5\u03bb\u03bb\u03b5\u2026 \u03ba\u03b1\u03c1\u03b1\u03bd\u03c4\u03af\u03bd\u03b1 \u03c3\u03c4\u03b7 \u039c\u03cd\u03ba\u03bf\u03bd\u03bf. \u00ab\u0395\u03af\u03bd\u03b1\u03b9 \u03b1\u03c5\u03c4\u03bf\u03bd\u03cc\u03b7\u03c4\u03bf \u03cc\u03c4\u03b9 \u03b7 \u03ba\u03bf\u03b9\u03bd\u03c9\u03bd\u03af\u03b1 \u03ba\u03b1\u03b9 \u03b7 \u03bf\u03b9\u03ba\u03bf\u03bd\u03bf\u03bc\u03af\u03b1 \u03b4\u03b5\u03bd \u03b1\u03bd\u03c4\u03ad\u03c7\u03bf\u03c5\u03bd \u03bf\u03c1\u03b9\u03b6\u03cc\u03bd\u03c4\u03b9\u03bf\u03c5\u03c2 \u03c0\u03b5\u03c1\u03b9\u03bf\u03c1\u03b9\u03c3\u03bc\u03bf\u03cd\u03c2\u00bb, \u03ad\u03bb\u03b5\u03b3\u03b5 \u03c7\u03b1\u03c1\u03b1\u03ba\u03c4\u03b7\u03c1\u03b9\u03c3\u03c4\u03b9\u03ba\u03ac \u03bf \u0393\u03b5\u03c1\u03b1\u03c0\u03b5\u03c4\u03c1\u03af\u03c4\u03b7\u03c2, \u03c4\u03b7\u03bd \u03ce\u03c1\u03b1 \u03c0\u03bf\u03c5 \u03b7 \u03ba\u03c5\u03b2\u03ad\u03c1\u03bd\u03b7\u03c3\u03b7 \u03b1\u03bd\u03b1\u03ba\u03bf\u03af\u03bd\u03c9\u03bd\u03b5\u2026 \u03b1\u03c5\u03c4\u03bf\u03cd\u03c2 \u03c4\u03bf\u03c5\u03c2 \u03bf\u03c1\u03b9\u03b6\u03cc\u03bd\u03c4\u03b9\u03bf\u03c5\u03c2 \u03c0\u03b5\u03c1\u03b9\u03bf\u03c1\u03b9\u03c3\u03bc\u03bf\u03cd\u03c2. 
\u03a9\u03c2 \u03b5\u03ba \u03c4\u03bf\u03cd\u03c4\u03c9\u03bd, \u03b4\u03cd\u03bf \u03c4\u03b9\u03bd\u03ac \u03bc\u03c0\u03bf\u03c1\u03b5\u03af \u03bd\u03b1 \u03c3\u03c5\u03bc\u03b2\u03b1\u03af\u03bd\u03bf\u03c5\u03bd: \u03b5\u03af\u03c4\u03b5 \u03bf \u03c5\u03c0\u03bf\u03c5\u03c1\u03b3\u03cc\u03c2 \u0395\u03c0\u03b9\u03ba\u03c1\u03b1\u03c4\u03b5\u03af\u03b1\u03c2 \u03b4\u03b5\u03bd \u03bc\u03b5\u03c4\u03ad\u03c7\u03b5\u03b9 \u03c0\u03bb\u03ad\u03bf\u03bd \u03c3\u03c4\u03b7 \u03bb\u03ae\u03c8\u03b7 \u03c4\u03c9\u03bd \u03b1\u03c0\u03bf\u03c6\u03ac\u03c3\u03b5\u03c9\u03bd, \u03b5\u03af\u03c4\u03b5 \u03b7 \u03b1\u03c0\u03cc\u03c6\u03b1\u03c3\u03b7 \u03b3\u03b9\u03b1 \u03bf\u03c1\u03b9\u03b6\u03cc\u03bd\u03c4\u03b9\u03b1 \u03bc\u03ad\u03c4\u03c1\u03b1 \u03b5\u03bb\u03ae\u03c6\u03b8\u03b7 \u03c5\u03c0\u03cc \u03c4\u03bf \u03ba\u03c1\u03ac\u03c4\u03bf\u03c2 \u03c0\u03b1\u03bd\u03b9\u03ba\u03bf\u03cd \u03c4\u03bf \u03c0\u03c1\u03c9\u03af \u03c4\u03bf\u03c5 \u03a3\u03b1\u03b2\u03b2\u03ac\u03c4\u03bf\u03c5, \u03cc\u03c4\u03b1\u03bd \u03ad\u03c6\u03c4\u03b1\u03c3\u03b5 \u03c3\u03c4\u03bf \u039c\u03b1\u03be\u03af\u03bc\u03bf\u03c5 \u03b7 \u03c4\u03b5\u03bb\u03b5\u03c5\u03c4\u03b1\u03af\u03b1 \u00ab\u03c6\u03bf\u03c5\u03c1\u03bd\u03b9\u03ac\u00bb \u03c4\u03c9\u03bd \u03b5\u03c0\u03b9\u03b4\u03b7\u03bc\u03b9\u03bf\u03bb\u03bf\u03b3\u03b9\u03ba\u03ce\u03bd \u03b4\u03b5\u03b4\u03bf\u03bc\u03ad\u03bd\u03c9\u03bd \u03b3\u03b9\u03b1 \u03c4\u03bf \u03bd\u03b7\u03c3\u03af \u03c4\u03c9\u03bd \u03b1\u03bd\u03ad\u03bc\u03c9\u03bd\u2026", "example_title": "Topic NON_PAPER"}, {"text": "\u0395\u03af\u03bd\u03b1\u03b9 \u03be\u03b5\u03ba\u03ac\u03b8\u03b1\u03c1\u03bf \u03cc\u03c4\u03b9 \u03bc\u03b5\u03c4\u03ac \u03c4\u03bf \u03c0\u03bb\u03ae\u03b3\u03bc\u03b1 \u03c0\u03bf\u03c5 \u03b4\u03ad\u03c7\u03b8\u03b7\u03ba\u03b5 \u03b7 \u03ba\u03c5\u03b2\u03ad\u03c1\u03bd\u03b7\u03c3\u03ae \u03c4\u03bf\u03c5 \u03b1\u03c0\u03cc \u03c4\u03b9\u03c2 \u03b1\u03b4\u03c5\u03bd\u03b1\u03bc\u03af\u03b5\u03c2 \u03c3\u03c4\u03b7\u03bd \u03b1\u03bd\u03c4\u03b9\u03bc\u03b5\u03c4\u03ce\u03c0\u03b9\u03c3\u03b7 \u03c4\u03c9\u03bd \u03ba\u03b1\u03c4\u03b1\u03c3\u03c4\u03c1\u03bf\u03c6\u03b9\u03ba\u03ce\u03bd \u03c0\u03c5\u03c1\u03ba\u03b1\u03b3\u03b9\u03ce\u03bd \u03c4\u03bf \u03bc\u03b5\u03b3\u03ac\u03bb\u03bf \u03c3\u03c4\u03bf\u03af\u03c7\u03b7\u03bc\u03b1 \u03b3\u03b9\u03b1 \u03c4\u03bf\u03bd \u039a\u03c5\u03c1\u03b9\u03ac\u03ba\u03bf \u039c\u03b7\u03c4\u03c3\u03bf\u03c4\u03ac\u03ba\u03b7 \u03b5\u03af\u03bd\u03b1\u03b9 \u03bd\u03b1 \u03c0\u03c1\u03bf\u03c7\u03c9\u03c1\u03ae\u03c3\u03b5\u03b9 \u03c3\u03c5\u03bd\u03c4\u03b5\u03c4\u03b1\u03b3\u03bc\u03ad\u03bd\u03b1 \u03ba\u03b1\u03b9 \u03c7\u03c9\u03c1\u03af\u03c2 \u03c0\u03b1\u03c1\u03b1\u03c4\u03c1\u03ac\u03b3\u03bf\u03c5\u03b4\u03b1 \u03bf \u03c3\u03c7\u03b5\u03b4\u03b9\u03b1\u03c3\u03bc\u03cc\u03c2 \u03b3\u03b9\u03b1 \u03c4\u03b7\u03bd \u03b1\u03c0\u03bf\u03ba\u03b1\u03c4\u03ac\u03c3\u03c4\u03b1\u03c3\u03b7 \u03c4\u03c9\u03bd \u03b6\u03b7\u03bc\u03b9\u03ce\u03bd. \u039f \u03a0\u03c1\u03c9\u03b8\u03c5\u03c0\u03bf\u03c5\u03c1\u03b3\u03cc\u03c2 \u03ad\u03c7\u03b5\u03b9 \u03ae\u03b4\u03b7 \u03c6\u03c4\u03b9\u03ac\u03be\u03b5\u03b9 \u03bc\u03b9\u03b1 \u03bf\u03bc\u03ac\u03b4\u03b1 \u03ba\u03c1\u03bf\u03cd\u03c3\u03b7\u03c2 \u03c4\u03b7\u03bd \u03bf\u03c0\u03bf\u03af\u03b1 \u03b1\u03c0\u03bf\u03c4\u03b5\u03bb\u03bf\u03cd\u03bd 9 \u03c5\u03c0\u03bf\u03c5\u03c1\u03b3\u03bf\u03af. 
\u03a4\u03b1 \u03bc\u03ad\u03bb\u03b7 \u03c0\u03bf\u03c5 \u03b1\u03c0\u03b1\u03c1\u03c4\u03af\u03b6\u03bf\u03c5\u03bd \u03c4\u03b7\u03bd \u03bf\u03bc\u03ac\u03b4\u03b1 \u03ba\u03c1\u03bf\u03cd\u03c3\u03b7\u03c2 \u03ba\u03b1\u03b9 \u03c4\u03b1 \u03bf\u03c0\u03bf\u03af\u03b1 \u03b2\u03c1\u03af\u03c3\u03ba\u03bf\u03bd\u03c4\u03b1\u03b9 \u03c3\u03b5 \u03c3\u03c5\u03bd\u03b5\u03c7\u03ae, \u03ba\u03b1\u03b8\u03b7\u03bc\u03b5\u03c1\u03b9\u03bd\u03ae \u03b5\u03c0\u03b1\u03c6\u03ae \u03bc\u03b5 \u03c4\u03bf\u03bd \u039a\u03c5\u03c1\u03b9\u03ac\u03ba\u03bf \u039c\u03b7\u03c4\u03c3\u03bf\u03c4\u03ac\u03ba\u03b7 \u03b5\u03af\u03bd\u03b1\u03b9, \u03cc\u03c0\u03c9\u03c2 \u03bc\u03b1\u03c2 \u03c0\u03bb\u03b7\u03c1\u03bf\u03c6\u03bf\u03c1\u03b5\u03af \u03b7 \u03c3\u03c4\u03ae\u03bb\u03b7 \u00ab\u0398\u03b5\u03c9\u03c1\u03b5\u03af\u03bf\u00bb \u03c4\u03b7\u03c2 \u00ab\u039a\u03b1\u03b8\u03b7\u03bc\u03b5\u03c1\u03b9\u03bd\u03ae\u03c2\u00bb \u03b5\u03af\u03bd\u03b1\u03b9 \u03bf\u03b9: \u0393. \u0393\u03b5\u03c1\u03b1\u03c0\u03b5\u03c4\u03c1\u03af\u03c4\u03b7\u03c2, \u0391. \u03a3\u03ba\u03ad\u03c1\u03c4\u03c3\u03bf\u03c2, \u03a7\u03c1. \u03a4\u03c1\u03b9\u03b1\u03bd\u03c4\u03cc\u03c0\u03bf\u03c5\u03bb\u03bf\u03c2, \u039a. \u039a\u03b1\u03c1\u03b1\u03bc\u03b1\u03bd\u03bb\u03ae\u03c2, \u039a. \u03a3\u03ba\u03c1\u03ad\u03ba\u03b1\u03c2, \u03a3\u03c4. \u03a0\u03ad\u03c4\u03c3\u03b1\u03c2, \u03a3\u03c0. \u039b\u03b9\u03b2\u03b1\u03bd\u03cc\u03c2 \u03ba\u03b1\u03b9 \u03c6\u03c5\u03c3\u03b9\u03ba\u03ac \u03bf\u03b9 \u03a7\u03c1. \u03a3\u03c4\u03b1\u03b9\u03ba\u03bf\u03cd\u03c1\u03b1\u03c2 \u03ba\u03b1\u03b9 \u0398. \u03a3\u03ba\u03c5\u03bb\u03b1\u03ba\u03ac\u03ba\u03b7\u03c2.", "example_title": "Topic OPINION"}]}
cvcio/mediawatch-el-topics
null
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "Greek", "news", "el", "doi:10.57967/hf/0711", "license:gpl-3.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "el" ]
TAGS #transformers #pytorch #safetensors #roberta #text-classification #Greek #news #el #doi-10.57967/hf/0711 #license-gpl-3.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
Disclaimer: *This model is still under testing and may change in the future, we will try to keep backwards compatibility. For any questions reach us at info@URL* MediaWatch News Topics (Greek) ============================== Fine-tuned model for multi-label text-classification (SequenceClassification), based on roberta-el-news, using Hugging Face's Transformers library. This model is to classify news in real-time on upto 33 topics including: *AFFAIRS*, *AGRICULTURE*, *ARTS\_AND\_CULTURE*, *BREAKING\_NEWS*, *BUSINESS*, *COVID*, *ECONOMY*, *EDUCATION*, *ELECTIONS*, *ENTERTAINMENT*, *ENVIRONMENT*, *FOOD*, *HEALTH*, *INTERNATIONAL*, *LAW\_AND\_ORDER*, *MILITARY*, *NON\_PAPER*, *OPINION*, *POLITICS*, *REFUGEE*, *REGIONAL*, *RELIGION*, *SCIENCE*, *SOCIAL\_MEDIA*, *SOCIETY*, *SPORTS*, *TECH*, *TOURISM*, *TRANSPORT*, *TRAVEL*, *WEATHER*, *CRIME*, *JUSTICE*. How to use ---------- You can use this model directly with a pipeline for text-classification: Labels ------ All labels, except *NON\_PAPER*, retrieved by source articles during the data collection step, without any preprocessing, assuming that journalists and newsrooms assign correct tags to the articles. We disregarded all articles with more than 6 tags to reduce bias and tag manipulation. Pretraining ----------- The model was pretrained using an NVIDIA A10 GPU for 15 epochs (~ approx 59K steps, 8 hours training) with a batch size of 128. The optimizer used is Adam with a learning rate of 1e-5, and weight decay 0.01. We used roc\_auc\_micro to evaluate the results. ### Framework versions * Transformers 4.13.0 * Pytorch 1.9.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3 Authors ------- Dimitris Papaevagelou - @andefined About Us -------- Civic Information Office is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest.
[ "### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.9.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3\n\n\nAuthors\n-------\n\n\nDimitris Papaevagelou - @andefined\n\n\nAbout Us\n--------\n\n\nCivic Information Office is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest." ]
[ "TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #Greek #news #el #doi-10.57967/hf/0711 #license-gpl-3.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.9.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3\n\n\nAuthors\n-------\n\n\nDimitris Papaevagelou - @andefined\n\n\nAbout Us\n--------\n\n\nCivic Information Office is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest." ]
fill-mask
transformers
# RoBERTa Greek base model Pretrained model on Greek language with the Masked Language Modeling (MLM) objective using [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. This model is *NOT* case-sensitive and all Greek diacritics retained. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python # example url # https://www.news247.gr/politiki/misologa-maximoy-gia-tin-ekthesi-tsiodra-lytra-gia-ti-thnitotita-ektos-meth.9462425.html # not present in train/eval set from transformers import pipeline pipe = pipeline('fill-mask', model='cvcio/roberta-el-news') pipe( 'Η κυβέρνηση μουδιασμένη από τη <mask> της έκθεσης Τσιόδρα-Λύτρα, ' 'επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.' ) # outputs [ { 'sequence': 'Η κυβέρνηση μουδιασμένη από τη δημοσιοποίηση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.', 'score': 0.5881184339523315, 'token': 20235, 'token_str': ' δημοσιοποίηση' }, { 'sequence': 'Η κυβέρνηση μουδιασμένη από τη δημοσίευση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.', 'score': 0.05952141433954239, 'token': 9696, 'token_str': ' δημοσίευση' }, { 'sequence': 'Η κυβέρνηση μουδιασμένη από τη διαχείριση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.', 'score': 0.029887061566114426, 'token': 4315, 'token_str': ' διαχείριση' }, { 'sequence': 'Η κυβέρνηση μουδιασμένη από τη διαρροή της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.', 'score': 0.022848669439554214, 'token': 24940, 'token_str': ' διαρροή' }, { 'sequence': 'Η κυβέρνηση μουδιασμένη από τη ματαίωση της έκθεσης Τσιόδρα-Λύτρα, επιχειρεί χωρίς να απαντά ουσιαστικά να ρίξει ευθύνες στον ΣΥΡΙΖΑ, που κυβερνούσε πριν... 2 χρόνια.', 'score': 0.01729060709476471, 'token': 46913, 'token_str': ' ματαίωση' } ] ``` ## Training data The model was pretrained on 8 millon unique news articles (~ approx 160M sentences, 33GB of text), collected with [MediaWatch](https://mediawatch.io/), from October 2016 upto December 2021. ## Preprocessing The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,265. During the preprocessing we only unescaped html text to the correspoing Unicode characters (ex. `&amp;` => `&`). ## Pretraining The model was pretrained using an NVIDIA A10 GPU for 3 epochs (~ approx 760K steps, 182 hours) with a batch size of 14 (x2 gradient accumulation steps = 28) and a sequence length of 512 tokens. The optimizer used is Adam with a learning rate of 5e-5, and linear decay of the learning rate. 
### Training results | epochs | steps | train/train_loss | train/loss | eval/loss | |-------:|--------:|-----------------:|------------:|----------:| | 3 | 765,414 | 0.3960 | 1.2356 | 0.9028 | ### Evaluation results The model fine-tuned on ner task using the [elNER](https://github.com/nmpartzio/elner) dataset and achieved the following results: | task | epochs | lr | batch | dataset | precision | recall | f1 | accuracy | |-----:|-------:|-----:|------:|--------:|----------:|-------:|-------:|---------:| | ner | 5 | 1e-5 | 16/16 | elNER4 | 0.8954 | 0.9280 | 0.9114 | 0.9872 | | ner | 5 | 1e-4 | 16/16 | elNER18 | 0.9069 | 0.9268 | 0.9168 | 0.9823 | ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-5 - train_batch_size: 14 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 28 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.13.0 - Pytorch 1.9.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3 ## Authors Dimitris Papaevagelou - [@andefined](https://github.com/andefined) ## About Us [Civic Information Office](https://cvcio.org/) is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest.
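As a follow-up to the elNER evaluation above, a minimal starting point for fine-tuning this checkpoint on NER looks like the sketch below. The label list is a hypothetical placeholder — use the actual elNER4 or elNER18 tag set — and the classification head is randomly initialized until you train it:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical label list; replace with the real elNER4 or elNER18 tags.
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

tokenizer = AutoTokenizer.from_pretrained("cvcio/roberta-el-news")
model = AutoModelForTokenClassification.from_pretrained(
    "cvcio/roberta-el-news",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
```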
{"language": "el", "license": "gpl-3.0", "tags": ["generated_from_trainer", "roberta", "Greek", "news", "transformers"], "widget": [{"text": "\u0397 \u03ba\u03c5\u03b2\u03ad\u03c1\u03bd\u03b7\u03c3\u03b7 \u03bc\u03bf\u03c5\u03b4\u03b9\u03b1\u03c3\u03bc\u03ad\u03bd\u03b7 \u03b1\u03c0\u03cc \u03c4\u03b7 <mask> \u03c4\u03b7\u03c2 \u03ad\u03ba\u03b8\u03b5\u03c3\u03b7\u03c2 \u03a4\u03c3\u03b9\u03cc\u03b4\u03c1\u03b1-\u039b\u03cd\u03c4\u03c1\u03b1, \u03b5\u03c0\u03b9\u03c7\u03b5\u03b9\u03c1\u03b5\u03af \u03c7\u03c9\u03c1\u03af\u03c2 \u03bd\u03b1 \u03b1\u03c0\u03b1\u03bd\u03c4\u03ac \u03bf\u03c5\u03c3\u03b9\u03b1\u03c3\u03c4\u03b9\u03ba\u03ac \u03bd\u03b1 \u03c1\u03af\u03be\u03b5\u03b9 \u03b5\u03c5\u03b8\u03cd\u03bd\u03b5\u03c2 \u03c3\u03c4\u03bf\u03bd \u03a3\u03a5\u03a1\u0399\u0396\u0391, \u03c0\u03bf\u03c5 \u03ba\u03c5\u03b2\u03b5\u03c1\u03bd\u03bf\u03cd\u03c3\u03b5 \u03c0\u03c1\u03b9\u03bd... 2 \u03c7\u03c1\u03cc\u03bd\u03b9\u03b1."}], "model-index": [{"name": "roberta-el-news", "results": []}]}
cvcio/roberta-el-news
null
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "Greek", "news", "el", "doi:10.57967/hf/0712", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "el" ]
TAGS #transformers #pytorch #safetensors #roberta #fill-mask #generated_from_trainer #Greek #news #el #doi-10.57967/hf/0712 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
RoBERTa Greek base model ======================== Pretrained model on Greek language with the Masked Language Modeling (MLM) objective using Hugging Face's Transformers library. This model is *NOT* case-sensitive and all Greek diacritics retained. ### How to use You can use this model directly with a pipeline for masked language modeling: Training data ------------- The model was pretrained on 8 millon unique news articles (~ approx 160M sentences, 33GB of text), collected with MediaWatch, from October 2016 upto December 2021. Preprocessing ------------- The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,265. During the preprocessing we only unescaped html text to the correspoing Unicode characters (ex. '&' => '&'). Pretraining ----------- The model was pretrained using an NVIDIA A10 GPU for 3 epochs (~ approx 760K steps, 182 hours) with a batch size of 14 (x2 gradient accumulation steps = 28) and a sequence length of 512 tokens. The optimizer used is Adam with a learning rate of 5e-5, and linear decay of the learning rate. ### Training results ### Evaluation results The model fine-tuned on ner task using the elNER dataset and achieved the following results: ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-5 * train\_batch\_size: 14 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 28 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Framework versions * Transformers 4.13.0 * Pytorch 1.9.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3 Authors ------- Dimitris Papaevagelou - @andefined About Us -------- Civic Information Office is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest.
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nTraining data\n-------------\n\n\nThe model was pretrained on 8 millon unique news articles (~ approx 160M sentences, 33GB of text), collected with MediaWatch, from October 2016 upto December 2021.\n\n\nPreprocessing\n-------------\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,265. During the preprocessing we only unescaped html text to the correspoing Unicode characters (ex. '&' => '&').\n\n\nPretraining\n-----------\n\n\nThe model was pretrained using an NVIDIA A10 GPU for 3 epochs (~ approx 760K steps, 182 hours) with a batch size of 14 (x2 gradient accumulation steps = 28) and a sequence length of 512 tokens. The optimizer used is Adam with a learning rate of 5e-5, and linear decay of the learning rate.", "### Training results", "### Evaluation results\n\n\nThe model fine-tuned on ner task using the elNER dataset and achieved the following results:", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-5\n* train\\_batch\\_size: 14\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 28\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.9.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3\n\n\nAuthors\n-------\n\n\nDimitris Papaevagelou - @andefined\n\n\nAbout Us\n--------\n\n\nCivic Information Office is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest." ]
[ "TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #generated_from_trainer #Greek #news #el #doi-10.57967/hf/0712 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nTraining data\n-------------\n\n\nThe model was pretrained on 8 millon unique news articles (~ approx 160M sentences, 33GB of text), collected with MediaWatch, from October 2016 upto December 2021.\n\n\nPreprocessing\n-------------\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,265. During the preprocessing we only unescaped html text to the correspoing Unicode characters (ex. '&' => '&').\n\n\nPretraining\n-----------\n\n\nThe model was pretrained using an NVIDIA A10 GPU for 3 epochs (~ approx 760K steps, 182 hours) with a batch size of 14 (x2 gradient accumulation steps = 28) and a sequence length of 512 tokens. The optimizer used is Adam with a learning rate of 5e-5, and linear decay of the learning rate.", "### Training results", "### Evaluation results\n\n\nThe model fine-tuned on ner task using the elNER dataset and achieved the following results:", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-5\n* train\\_batch\\_size: 14\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 28\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.9.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3\n\n\nAuthors\n-------\n\n\nDimitris Papaevagelou - @andefined\n\n\nAbout Us\n--------\n\n\nCivic Information Office is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest." ]
fill-mask
transformers
# Greek RoBERTa Uncased (v1)

Pretrained model on Greek language using a masked language modeling (MLM) objective using [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. This model is case-insensitive and has no Greek diacritics (uncased, no-accents).

### Training data

This model was pretrained on almost 18M unique tweets, all Greek, collected between 2008-2021, from almost 450K distinct users.

### Preprocessing

The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50256. For the tokenizer we split strings containing any numbers (ex. EU2019 ==> EU 2019). The tweet normalization logic is described in the example listed below.

```python
import unicodedata
from transformers import pipeline

def normalize_tweet(tweet, do_lower = True, do_strip_accents = True, do_split_word_numbers = False, user_fill = '', url_fill = ''):
    # your tweet pre-processing logic goes here
    # example...
    # remove extra spaces, escape HTML, replace non-standard punctuation
    # replace any @user with blank
    # replace any link with blank
    # explode hashtags to strings (ex. #EU2019 ==> EU 2019)
    # remove all emojis

    # if do_split_word_numbers:
    #     split strings containing any numbers

    # standardize punctuation
    # remove unicode symbols

    if do_lower:
        tweet = tweet.lower()
    if do_strip_accents:
        tweet = strip_accents(tweet)

    return tweet.strip()

def strip_accents(s):
    return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn')

nlp = pipeline('fill-mask', model = 'cvcio/roberta-el-uncased-twitter-v1')

print(
    nlp(
        normalize_tweet(
            '<mask>: Μεγάλη υποχώρηση του ιικού φορτίου σε Αττική και Θεσσαλονίκη'
        )
    )
)
```

### Pretraining

The model was pretrained on a T4 GPU for 1.2M steps with a batch size of 96 and a sequence length of 96. The optimizer used is Adam with a learning rate of 1e-5, gradient accumulation steps of 8, learning rate warmup for 50000 steps and linear decay of the learning rate after.

### Authors

Dimitris Papaevagelou - [@andefined](https://github.com/andefined)

### About Us

[Civic Information Office](https://cvcio.org/) is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest.
{"language": "el", "tags": ["roberta", "twitter", "Greek"], "widget": [{"text": "<mask>: \u03bc\u03b5\u03b3\u03b1\u03bb\u03b7 \u03c5\u03c0\u03bf\u03c7\u03c9\u03c1\u03b7\u03c3\u03b7 \u03c4\u03bf\u03c5 \u03b9\u03b9\u03ba\u03bf\u03c5 \u03c6\u03bf\u03c1\u03c4\u03b9\u03bf\u03c5 \u03c3\u03b5 \u03b1\u03c4\u03c4\u03b9\u03ba\u03b7 \u03ba\u03b1\u03b9 \u03b8\u03b5\u03c3\u03c3\u03b1\u03bb\u03bf\u03bd\u03b9\u03ba\u03b7"}]}
cvcio/roberta-el-uncased-twitter-v1
null
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "twitter", "Greek", "el", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "el" ]
TAGS #transformers #pytorch #safetensors #roberta #fill-mask #twitter #Greek #el #autotrain_compatible #endpoints_compatible #region-us
# Greek RoBERTa Uncased (v1) Pretrained model on Greek language using a masked language modeling (MLM) objective using Hugging Face's Transformers library. This model is case-sensitive and has no Greek diacritics (uncased, no-accents). ### Training data This model was pretrained on almost 18M unique tweets, all Greek, collected between 2008-2021, from almost 450K distinct users. ### Preprocessing The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50256. For the tokenizer we splited strings containing any numbers (ex. EU2019 ==> EU 2019). The tweet normalization logic described in the example listed bellow. ### Pretraining The model was pretrained on a T4 GPU for 1.2M steps with a batch size of 96 and a sequence length of 96. The optimizer used is Adam with a learning rate of 1e-5, gradient accumulation steps of 8, learning rate warmup for 50000 steps and linear decay of the learning rate after. ### Authors Dimitris Papaevagelou - @andefined ### About Us Civic Information Office is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest.
[ "# Greek RoBERTa Uncased (v1)\n\nPretrained model on Greek language using a masked language modeling (MLM) objective using Hugging Face's Transformers library. This model is case-sensitive and has no Greek diacritics (uncased, no-accents).", "### Training data\n\nThis model was pretrained on almost 18M unique tweets, all Greek, collected between 2008-2021, from almost 450K distinct users.", "### Preprocessing\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50256. For the tokenizer we splited strings containing any numbers (ex. EU2019 ==> EU 2019). The tweet normalization logic described in the example listed bellow.", "### Pretraining\n\nThe model was pretrained on a T4 GPU for 1.2M steps with a batch size of 96 and a sequence length of 96. The optimizer used is Adam with a learning rate of 1e-5, gradient accumulation steps of 8, learning rate warmup for 50000 steps and linear decay of the learning rate after.", "### Authors\n\nDimitris Papaevagelou - @andefined", "### About Us\n\nCivic Information Office is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest." ]
[ "TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #twitter #Greek #el #autotrain_compatible #endpoints_compatible #region-us \n", "# Greek RoBERTa Uncased (v1)\n\nPretrained model on Greek language using a masked language modeling (MLM) objective using Hugging Face's Transformers library. This model is case-sensitive and has no Greek diacritics (uncased, no-accents).", "### Training data\n\nThis model was pretrained on almost 18M unique tweets, all Greek, collected between 2008-2021, from almost 450K distinct users.", "### Preprocessing\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50256. For the tokenizer we splited strings containing any numbers (ex. EU2019 ==> EU 2019). The tweet normalization logic described in the example listed bellow.", "### Pretraining\n\nThe model was pretrained on a T4 GPU for 1.2M steps with a batch size of 96 and a sequence length of 96. The optimizer used is Adam with a learning rate of 1e-5, gradient accumulation steps of 8, learning rate warmup for 50000 steps and linear decay of the learning rate after.", "### Authors\n\nDimitris Papaevagelou - @andefined", "### About Us\n\nCivic Information Office is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest." ]
token-classification
transformers
## Hello World
{}
cwtpc/wangchanberta-ner-8989
null
[ "transformers", "pytorch", "camembert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #camembert #token-classification #autotrain_compatible #endpoints_compatible #region-us
## Hello World
[ "## Hello World" ]
[ "TAGS\n#transformers #pytorch #camembert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n", "## Hello World" ]
null
transformers
## Cyclone Chinese NER

This model provides a simplified Chinese NER model based on a pretrained BERT model (specifically BERT + CRF).
Currently, we only support 8 general types of entities ("address", "company", "government", "name", "organization", "position", "scene", "time").

### Usage
    # Note: CNerTokenizer and BertCrfForNer are custom classes shipped with the
    # model's training code; they are not part of the transformers library.
    from transformers import BertConfig

    config = BertConfig.from_pretrained("bert-base-chinese", num_labels=num_labels)

    model_path = "cyclone/cyclone-ner"

    tokenizer = CNerTokenizer.from_pretrained(model_path, do_lower_case=True)
    model = BertCrfForNer.from_pretrained(model_path, config=config)
{}
cyclone/cyclone-ner
null
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #endpoints_compatible #region-us
## Cyclone Chinese NER This model provides simplified Chinese NER model based on pretrained model BERT (specifically BERT + CRF) Currently, we only support 8 general type of entities ("address", "company", "government", "name", "organization", "position", "scene", "time") ### Usage from transformers import BertConfig config = BertConfig.from_pretrained("bert-base-chinese", num_labels=num_labels) model_path = "cyclone/cyclone-ner" tokenizer = CNerTokenizer.from_pretrained(model_path, do_lower_case=True) model = BertCrfForNer.from_pretrained(model_path, config=config)
[ "## Cyclone Chinese NER\r\n\r\nThis model provides simplified Chinese NER model based on pretrained model BERT (specifically BERT + CRF)\r\nCurrently, we only support 8 general type of entities (\"address\", \"company\", \"government\", \"name\", \"organization\", \"position\", \"scene\", \"time\")", "### Usage\r\n from transformers import BertConfig\r\n\r\n config = BertConfig.from_pretrained(\"bert-base-chinese\", num_labels=num_labels)\r\n\r\n model_path = \"cyclone/cyclone-ner\"\r\n\r\n tokenizer = CNerTokenizer.from_pretrained(model_path, do_lower_case=True)\r\n model = BertCrfForNer.from_pretrained(model_path, config=config)" ]
[ "TAGS\n#transformers #pytorch #bert #endpoints_compatible #region-us \n", "## Cyclone Chinese NER\r\n\r\nThis model provides simplified Chinese NER model based on pretrained model BERT (specifically BERT + CRF)\r\nCurrently, we only support 8 general type of entities (\"address\", \"company\", \"government\", \"name\", \"organization\", \"position\", \"scene\", \"time\")", "### Usage\r\n from transformers import BertConfig\r\n\r\n config = BertConfig.from_pretrained(\"bert-base-chinese\", num_labels=num_labels)\r\n\r\n model_path = \"cyclone/cyclone-ner\"\r\n\r\n tokenizer = CNerTokenizer.from_pretrained(model_path, do_lower_case=True)\r\n model = BertCrfForNer.from_pretrained(model_path, config=config)" ]
feature-extraction
transformers
## Cyclone SIMCSE RoBERTa WWM Ext Chinese

This model provides simplified Chinese sentence embeddings based on [Simple Contrastive Learning (SimCSE)](https://arxiv.org/abs/2104.08821).
The pretrained model (Chinese RoBERTa WWM Ext) is used for token encoding.

### Usage
Please use [SentenceTransformer](https://github.com/UKPLab/sentence-transformers) to load the model.

    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer('cyclone/simcse-chinese-roberta-wwm-ext')
{}
cyclone/simcse-chinese-roberta-wwm-ext
null
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2104.08821", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2104.08821" ]
[]
TAGS #transformers #pytorch #bert #feature-extraction #arxiv-2104.08821 #endpoints_compatible #has_space #region-us
## Cyclone SIMCSE RoBERTa WWM Ext Chinese This model provides simplified Chinese sentence embeddings encoding based on Simple Contrastive Learning. The pretrained model(Chinese RoBERTa WWM Ext) is used for token encoding. ### Usage Please use SentenceTransformer to load the model. from sentence_transformers import SentenceTransformer encoder = SentenceTransformer('cyclone/simcse-chinese-roberta-wwm-ext')
[ "## Cyclone SIMCSE RoBERTa WWM Ext Chinese\r\n\r\nThis model provides simplified Chinese sentence embeddings encoding based on Simple Contrastive Learning.\r\nThe pretrained model(Chinese RoBERTa WWM Ext) is used for token encoding.", "### Usage\r\nPlease use SentenceTransformer to load the model.\r\n\r\n from sentence_transformers import SentenceTransformer\r\n \r\n encoder = SentenceTransformer('cyclone/simcse-chinese-roberta-wwm-ext')" ]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.08821 #endpoints_compatible #has_space #region-us \n", "## Cyclone SIMCSE RoBERTa WWM Ext Chinese\r\n\r\nThis model provides simplified Chinese sentence embeddings encoding based on Simple Contrastive Learning.\r\nThe pretrained model(Chinese RoBERTa WWM Ext) is used for token encoding.", "### Usage\r\nPlease use SentenceTransformer to load the model.\r\n\r\n from sentence_transformers import SentenceTransformer\r\n \r\n encoder = SentenceTransformer('cyclone/simcse-chinese-roberta-wwm-ext')" ]
fill-mask
transformers
# About This is a sample repo.
{}
cylee/tutorial
null
[ "transformers", "tf", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #tf #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
# About This is a sample repo.
[ "# About\n\nThis is a sample repo." ]
[ "TAGS\n#transformers #tf #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n", "# About\n\nThis is a sample repo." ]
fill-mask
transformers
# Description:

This is a small pre-trained model for the Sinhala language, using a Masked Language Modeling (MLM) objective. The model is trained on the OSCAR Sinhala dataset.

# How to Use:
The model can be used directly with a pipeline for masked language modeling:

```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("d42kw01f/Sinhala-RoBERTa")
>>> model = AutoModelForMaskedLM.from_pretrained("d42kw01f/Sinhala-RoBERTa")
>>> fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> fill_mask("මම ගෙදර <mask>.")
[{'score': 0.1822454035282135,
  'sequence': 'මම ගෙදර ආව.',
  'token': 701,
  'token_str': ' ආව'},
 {'score': 0.10513380169868469,
  'sequence': 'මම ගෙදර ය.',
  'token': 310,
  'token_str': ' ය'},
 {'score': 0.06417194753885269,
  'sequence': 'මම ගෙදර එක.',
  'token': 328,
  'token_str': ' එක'},
 {'score': 0.05026362091302872,
  'sequence': 'මම ගෙදර ඇත.',
  'token': 330,
  'token_str': ' ඇත'},
 {'score': 0.029960114508867264,
  'sequence': 'මම ගෙදර යනව.',
  'token': 834,
  'token_str': ' යනව'}]
```
{}
d42kw01f/Sinhala-RoBERTa
null
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
# Description:

This is a small pre-trained model for the Sinhala language, using a Masked Language Modeling (MLM) objective. The model is trained on the OSCAR Sinhala dataset.

# How to Use:
The model can be used directly with a pipeline for masked language modeling:
[ "# Description:\n\nThis is a smaller per-trained model on Sinhalese Language using Masked Language Modeling(MLM). And the model is trained on Oscar Sinhala dataset.", "# How to Use:\nThe model can be used directly with a pipeline for masked language modeling:" ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n", "# Description:\n\nThis is a smaller per-trained model on Sinhalese Language using Masked Language Modeling(MLM). And the model is trained on Oscar Sinhala dataset.", "# How to Use:\nThe model can be used directly with a pipeline for masked language modeling:" ]
fill-mask
transformers
# Description:

This is a small pre-trained model for the Tamil language, using a Masked Language Modeling (MLM) objective. The model is trained on the OSCAR Tamil dataset.

# How to Use:
The model can be used directly with a pipeline for masked language modeling:

```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("d42kw01f/Tamil-RoBERTa")
>>> model = AutoModelForMaskedLM.from_pretrained("d42kw01f/Tamil-RoBERTa")
>>> fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> fill_mask("நான் வீட்டு <mask>.")
```
{}
d42kw01f/Tamil-RoBERTa
null
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
# Description:

This is a small pre-trained model for the Tamil language, using a Masked Language Modeling (MLM) objective. The model is trained on the OSCAR Tamil dataset.

# How to Use:
The model can be used directly with a pipeline for masked language modeling:
[ "# Description:\n\nThis is a smaller per-trained model on Tamil Language using Masked Language Modeling(MLM). And the model is trained on Oscar Tamil dataset.", "# How to Use:\nThe model can be used directly with a pipeline for masked language modeling:" ]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n", "# Description:\n\nThis is a smaller per-trained model on Tamil Language using Masked Language Modeling(MLM). And the model is trained on Oscar Tamil dataset.", "# How to Use:\nThe model can be used directly with a pipeline for masked language modeling:" ]
text-classification
transformers
## About the Model An English sequence classification model, trained on MBAD Dataset to detect bias and fairness in sentences (news articles). This model was built on top of distilbert-base-uncased model and trained for 30 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512. - Dataset : MBAD Data - Carbon emission 0.319355 Kg | Train Accuracy | Validation Accuracy | Train loss | Test loss | |---------------:| -------------------:| ----------:|----------:| | 76.97 | 62.00 | 0.45 | 0.96 | ## Usage The easiest way is to load the inference api from huggingface and second method is through the pipeline object offered by transformers library. ```python from transformers import AutoTokenizer, TFAutoModelForSequenceClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("d4data/bias-detection-model") model = TFAutoModelForSequenceClassification.from_pretrained("d4data/bias-detection-model") classifier = pipeline('text-classification', model=model, tokenizer=tokenizer) # cuda = 0,1 based on gpu availability classifier("The irony, of course, is that the exhibit that invites people to throw trash at vacuuming Ivanka Trump lookalike reflects every stereotype feminists claim to stand against, oversexualizing Ivanka’s body and ignoring her hard work.") ``` ## Author This model is part of the Research topic "Bias and Fairness in AI" conducted by Deepak John Reji, Shaina Raza. If you use this work (code, model or dataset), please star at: > Bias & Fairness in AI, (2022), GitHub repository, <https://github.com/dreji18/Fairness-in-AI>
{"language": ["en"], "tags": ["Text Classification"], "co2_eq_emissions": 0.319355, "widget": [{"text": "Nevertheless, Trump and other Republicans have tarred the protests as havens for terrorists intent on destroying property.", "example_title": "Biased example 1"}, {"text": "Billie Eilish issues apology for mouthing an anti-Asian derogatory term in a resurfaced video.", "example_title": "Biased example 2"}, {"text": "Christians should make clear that the perpetuation of objectionable vaccines and the lack of alternatives is a kind of coercion.", "example_title": "Biased example 3"}, {"text": "There have been a protest by a group of people", "example_title": "Non-Biased example 1"}, {"text": "While emphasizing he\u2019s not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology.", "example_title": "Non-Biased example 2"}]}
d4data/bias-detection-model
null
[ "transformers", "tf", "distilbert", "text-classification", "Text Classification", "en", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #tf #distilbert #text-classification #Text Classification #en #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us
About the Model --------------- An English sequence classification model, trained on MBAD Dataset to detect bias and fairness in sentences (news articles). This model was built on top of distilbert-base-uncased model and trained for 30 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512. * Dataset : MBAD Data * Carbon emission 0.319355 Kg Usage ----- The easiest way is to load the inference api from huggingface and second method is through the pipeline object offered by transformers library. Author ------ This model is part of the Research topic "Bias and Fairness in AI" conducted by Deepak John Reji, Shaina Raza. If you use this work (code, model or dataset), please star at: > > Bias & Fairness in AI, (2022), GitHub repository, <URL > > >
[]
[ "TAGS\n#transformers #tf #distilbert #text-classification #Text Classification #en #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
token-classification
spacy
## About the Model This model is trained on MBAD Dataset to recognize the biased word/phrases in a sentence. This model was built on top of roberta-base offered by Spacy transformers. This model is in association with https://huggingface.co/d4data/bias-detection-model | Feature | Description | | --- | --- | | **Name** | `Bias Recognizer Model` | | **Version** | `1.0` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | ## Author This model is part of the Research topic "Bias and Fairness in AI" conducted by Deepak John Reji, Shaina Raza. If you use this work (code, model or dataset), please star at: > Bias & Fairness in AI, (2022), GitHub repository, <https://github.com/dreji18/Fairness-in-AI>
{"language": ["en"], "tags": ["spacy", "token-classification"], "widget": [{"text": "Billie Eilish issues apology for mouthing an anti-Asian derogatory term in a resurfaced video.", "example_title": "Biased example 1"}, {"text": "Christians should make clear that the perpetuation of objectionable vaccines and the lack of alternatives is a kind of coercion.", "example_title": "Biased example 2"}, {"text": "But, whether this switch constitutes a true win for the racist right or not, it\u2019s clear that MAGA conservatives are highly attuned to how decisions are made in the White House and which positions they want to control.", "example_title": "Biased example 3"}, {"text": "The fact that the abortion rate among American blacks is far higher than the rate for whites is routinely chronicled and mourned.", "example_title": "Biased example 4"}]}
d4data/en_pipeline
null
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #spacy #token-classification #en #model-index #region-us
About the Model --------------- This model is trained on MBAD Dataset to recognize the biased word/phrases in a sentence. This model was built on top of roberta-base offered by Spacy transformers. This model is in association with URL Author ------ This model is part of the Research topic "Bias and Fairness in AI" conducted by Deepak John Reji, Shaina Raza. If you use this work (code, model or dataset), please star at: > > Bias & Fairness in AI, (2022), GitHub repository, <URL > > >
[]
[ "TAGS\n#spacy #token-classification #en #model-index #region-us \n" ]
text-classification
transformers
## About the Model An Environmental due diligence classification model, trained on customized environmental Dataset to detect contamination and remediation activities (both prevailing as well as planned) as a part of site assessment process. This model can identify the source of contamination, the extent of contamination, the types of contaminants present at the site, the flow of contaminants and their interaction with ground water, surface water and other surrounding water bodies . This model was built on top of distilbert-base-uncased model and trained for 10 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512. - Dataset : Open Source News data + Custom data - Carbon emission 0.1069 Kg ## Usage The easiest way is to load through the pipeline object offered by transformers library. ```python from transformers import AutoTokenizer, TFAutoModelForSequenceClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("d4data/environmental-due-diligence-model") model = TFAutoModelForSequenceClassification.from_pretrained("d4data/environmental-due-diligence-model") classifier = pipeline('text-classification', model=model, tokenizer=tokenizer) # cuda = 0,1 based on gpu availability classifier("At the every month post-injection monitoring event, TCE, carbon tetrachloride, and chloroform concentrations were above CBSGs in three of the wells") ``` ## Author This model is part of the Research topic "Environmental Due Diligence" conducted by Deepak John Reji, Afreen Aman. If you use this work (code, model or dataset), please cite as: > Environmental Due Diligence, (2020), https://www.sciencedirect.com/science/article/pii/S2665963822001117 ## You can support me here :) <a href="https://www.buymeacoffee.com/deepakjohnreji" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
{"language": ["en"], "tags": ["Text Classification"], "co2_eq_emissions": 0.1069, "widget": [{"text": "At the every month post-injection monitoring event, TCE, carbon tetrachloride, and chloroform concentrations were above CBSGs in three of the wells", "example_title": "Remediation Standards"}, {"text": "TRPH exceedances were observed in the subsurface soils immediately above the water table and there are no TRPH exceedances in surface soils.", "example_title": "Extent of Contamination"}, {"text": "weathered shale was encountered below the surface area with fluvial deposits. Sediments in the coastal plain region are found above and below the bedrock with sandstones and shales that form the basement rock", "example_title": "Geology"}]}
d4data/environmental-due-diligence-model
null
[ "transformers", "tf", "distilbert", "text-classification", "Text Classification", "en", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #tf #distilbert #text-classification #Text Classification #en #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us
## About the Model An Environmental due diligence classification model, trained on customized environmental Dataset to detect contamination and remediation activities (both prevailing as well as planned) as a part of site assessment process. This model can identify the source of contamination, the extent of contamination, the types of contaminants present at the site, the flow of contaminants and their interaction with ground water, surface water and other surrounding water bodies . This model was built on top of distilbert-base-uncased model and trained for 10 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512. - Dataset : Open Source News data + Custom data - Carbon emission 0.1069 Kg ## Usage The easiest way is to load through the pipeline object offered by transformers library. ## Author This model is part of the Research topic "Environmental Due Diligence" conducted by Deepak John Reji, Afreen Aman. If you use this work (code, model or dataset), please cite as: > Environmental Due Diligence, (2020), URL ## You can support me here :) <a href="URL target="_blank"><img src="URL alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
[ "## About the Model\nAn Environmental due diligence classification model, trained on customized environmental Dataset to detect contamination and remediation activities (both prevailing as well as planned) as a part of site assessment process. This model can identify the source of contamination, the extent of contamination, the types of contaminants present at the site, the flow of contaminants and their interaction with ground water, surface water and other surrounding water bodies .\n\nThis model was built on top of distilbert-base-uncased model and trained for 10 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512.\n\n- Dataset : Open Source News data + Custom data\n- Carbon emission 0.1069 Kg", "## Usage\nThe easiest way is to load through the pipeline object offered by transformers library.", "## Author\nThis model is part of the Research topic \"Environmental Due Diligence\" conducted by Deepak John Reji, Afreen Aman. If you use this work (code, model or dataset), please cite as:\n> Environmental Due Diligence, (2020), URL", "## You can support me here :)\n<a href=\"URL target=\"_blank\"><img src=\"URL alt=\"Buy Me A Coffee\" style=\"height: 60px !important;width: 217px !important;\" ></a>" ]
[ "TAGS\n#transformers #tf #distilbert #text-classification #Text Classification #en #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## About the Model\nAn Environmental due diligence classification model, trained on customized environmental Dataset to detect contamination and remediation activities (both prevailing as well as planned) as a part of site assessment process. This model can identify the source of contamination, the extent of contamination, the types of contaminants present at the site, the flow of contaminants and their interaction with ground water, surface water and other surrounding water bodies .\n\nThis model was built on top of distilbert-base-uncased model and trained for 10 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512.\n\n- Dataset : Open Source News data + Custom data\n- Carbon emission 0.1069 Kg", "## Usage\nThe easiest way is to load through the pipeline object offered by transformers library.", "## Author\nThis model is part of the Research topic \"Environmental Due Diligence\" conducted by Deepak John Reji, Afreen Aman. If you use this work (code, model or dataset), please cite as:\n> Environmental Due Diligence, (2020), URL", "## You can support me here :)\n<a href=\"URL target=\"_blank\"><img src=\"URL alt=\"Buy Me A Coffee\" style=\"height: 60px !important;width: 217px !important;\" ></a>" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8976 - Mae: 0.4268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.092 | 1.0 | 235 | 0.9514 | 0.5122 | | 0.9509 | 2.0 | 470 | 0.8976 | 0.4268 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]}
d4niel92/xlm-roberta-base-finetuned-marc-en
null
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-marc-en ================================== This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset. It achieves the following results on the evaluation set: * Loss: 0.8976 * Mae: 0.4268 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Harry
{"tags": ["conversational"]}
d4rk/harry
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry
[ "# Harry" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# opus-mt-zh-en-ep1-renri-zh-to-en

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2192
- Bleu: 18.2579
- Gen Len: 28.4817

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.2194        | 1.0   | 59472 | 2.2192          | 18.2579 | 28.4817 |

### Framework versions

- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "model_index": [{"name": "opus-mt-zh-en-ep1-renri-zh-to-en", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "metric": {"name": "Bleu", "type": "bleu", "value": 18.2579}}]}]}
dadada/opus-mt-zh-en-ep1-renri-zh-to-en
null
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
opus-mt-zh-en-ep1-renri-zh-to-en
================================

This model is a fine-tuned version of Helsinki-NLP/opus-mt-zh-en on an unknown dataset.
It achieves the following results on the evaluation set:

* Loss: 2.2192
* Bleu: 18.2579
* Gen Len: 28.4817

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP

### Training results

### Framework versions

* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
sentence-similarity
transformers
# Similarity between two sentences (fine-tuning with KoELECTRA-Small-v3 model and KorSTS dataset) ## Usage (Amazon SageMaker inference applicable) It uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to SageMaker Endpoint. ### inference_korsts.py ```python import json import sys import logging import torch from torch import nn from transformers import ElectraConfig from transformers import ElectraModel, AutoTokenizer, ElectraTokenizer, ElectraForSequenceClassification logging.basicConfig( level=logging.INFO, format='[{%(filename)s:%(lineno)d} %(levelname)s - %(message)s', handlers=[ logging.FileHandler(filename='tmp.log'), logging.StreamHandler(sys.stdout) ] ) logger = logging.getLogger(__name__) max_seq_length = 128 tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/koelectra-small-v3-korsts") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator' def model_fn(model_path): #### # If you have your own trained model # Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator' #### #config = ElectraConfig.from_json_file(f'{model_path}/config.json') #model = ElectraForSequenceClassification.from_pretrained(f'{model_path}/model.pth', config=config) model = ElectraForSequenceClassification.from_pretrained('daekeun-ml/koelectra-small-v3-korsts') model.to(device) return model def input_fn(input_data, content_type="application/jsonlines"): data_str = input_data.decode("utf-8") jsonlines = data_str.split("\n") transformed_inputs = [] for jsonline in jsonlines: text = json.loads(jsonline)["text"] logger.info("input text: {}".format(text)) encode_plus_token = tokenizer.encode_plus( text, max_length=max_seq_length, add_special_tokens=True, return_token_type_ids=False, padding="max_length", return_attention_mask=True, return_tensors="pt", truncation=True, ) transformed_inputs.append(encode_plus_token) return transformed_inputs def predict_fn(transformed_inputs, model): predicted_classes = [] for data in transformed_inputs: data = data.to(device) output = model(**data) prediction_dict = {} prediction_dict['score'] = output[0].squeeze().cpu().detach().numpy().tolist() jsonline = json.dumps(prediction_dict) logger.info("jsonline: {}".format(jsonline)) predicted_classes.append(jsonline) predicted_classes_jsonlines = "\n".join(predicted_classes) return predicted_classes_jsonlines def output_fn(outputs, accept="application/jsonlines"): return outputs, accept ``` ### test.py ```python >>> from inference_korsts import model_fn, input_fn, predict_fn, output_fn >>> with open('./samples/korsts.txt', mode='rb') as file: >>> model_input_data = file.read() >>> model = model_fn() >>> transformed_inputs = input_fn(model_input_data) >>> predicted_classes_jsonlines = predict_fn(transformed_inputs, model) >>> model_outputs = output_fn(predicted_classes_jsonlines) >>> print(model_outputs[0]) [{inference_korsts.py:44} INFO - input text: ['맛있는 라면을 먹고 싶어요', '후루룩 쩝쩝 후루룩 쩝쩝 맛좋은 라면'] [{inference_korsts.py:44} INFO - input text: ['뽀로로는 내친구', '머신러닝은 러닝머신이 아닙니다.'] [{inference_korsts.py:71} INFO - jsonline: {"score": 4.786738872528076} [{inference_korsts.py:71} INFO - jsonline: {"score": 0.2319069355726242} {"score": 4.786738872528076} {"score": 0.2319069355726242} ``` ### Sample data (samples/korsts.txt) ``` {"text": ["맛있는 라면을 먹고 싶어요", "후루룩 쩝쩝 후루룩 쩝쩝 맛좋은 라면"]} {"text": ["뽀로로는 내친구", "머신러닝은 러닝머신이 아닙니다."]} ``` ## References - KoELECTRA: https://github.com/monologg/KoELECTRA - KorNLI 
and KorSTS Dataset: https://github.com/kakaobrain/KorNLUDatasets
{"language": ["ko"], "license": "cc-by-4.0", "tags": ["sentence-similarity", "transformers"], "datasets": ["korsts"], "metrics": ["accuracy", "f1", "precision", "recall"], "pipeline_tag": "sentence-similarity"}
daekeun-ml/koelectra-small-v3-korsts
null
[ "transformers", "pytorch", "electra", "text-classification", "sentence-similarity", "ko", "dataset:korsts", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ko" ]
TAGS #transformers #pytorch #electra #text-classification #sentence-similarity #ko #dataset-korsts #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
# Similarity between two sentences (fine-tuning with KoELECTRA-Small-v3 model and KorSTS dataset) ## Usage (Amazon SageMaker inference applicable) It uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to SageMaker Endpoint. ### inference_korsts.py ### URL ### Sample data (samples/URL) ## References - KoELECTRA: URL - KorNLI and KorSTS Dataset: URL
[ "# Similarity between two sentences (fine-tuning with KoELECTRA-Small-v3 model and KorSTS dataset)", "## Usage (Amazon SageMaker inference applicable)\nIt uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to SageMaker Endpoint.", "### inference_korsts.py", "### URL", "### Sample data (samples/URL)", "## References\n- KoELECTRA: URL\n- KorNLI and KorSTS Dataset: URL" ]
[ "TAGS\n#transformers #pytorch #electra #text-classification #sentence-similarity #ko #dataset-korsts #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Similarity between two sentences (fine-tuning with KoELECTRA-Small-v3 model and KorSTS dataset)", "## Usage (Amazon SageMaker inference applicable)\nIt uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to SageMaker Endpoint.", "### inference_korsts.py", "### URL", "### Sample data (samples/URL)", "## References\n- KoELECTRA: URL\n- KorNLI and KorSTS Dataset: URL" ]
text-classification
transformers
# Sentiment Binary Classification (fine-tuning with KoELECTRA-Small-v3 model and Naver Sentiment Movie Corpus dataset) ## Usage (Amazon SageMaker inference applicable) It uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to SageMaker Endpoint. ### inference_nsmc.py ```python import json import sys import logging import torch from torch import nn from transformers import ElectraConfig from transformers import ElectraModel, AutoTokenizer, ElectraTokenizer, ElectraForSequenceClassification logging.basicConfig( level=logging.INFO, format='[{%(filename)s:%(lineno)d} %(levelname)s - %(message)s', handlers=[ logging.FileHandler(filename='tmp.log'), logging.StreamHandler(sys.stdout) ] ) logger = logging.getLogger(__name__) max_seq_length = 128 classes = ['Neg', 'Pos'] tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/koelectra-small-v3-nsmc") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") def model_fn(model_path=None): #### # If you have your own trained model # Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator' #### #config = ElectraConfig.from_json_file(f'{model_path}/config.json') #model = ElectraForSequenceClassification.from_pretrained(f'{model_path}/model.pth', config=config) # Download model from the Huggingface hub model = ElectraForSequenceClassification.from_pretrained('daekeun-ml/koelectra-small-v3-nsmc') model.to(device) return model def input_fn(input_data, content_type="application/jsonlines"): data_str = input_data.decode("utf-8") jsonlines = data_str.split("\n") transformed_inputs = [] for jsonline in jsonlines: text = json.loads(jsonline)["text"][0] logger.info("input text: {}".format(text)) encode_plus_token = tokenizer.encode_plus( text, max_length=max_seq_length, add_special_tokens=True, return_token_type_ids=False, padding="max_length", return_attention_mask=True, return_tensors="pt", truncation=True, ) transformed_inputs.append(encode_plus_token) return transformed_inputs def predict_fn(transformed_inputs, model): predicted_classes = [] for data in transformed_inputs: data = data.to(device) output = model(**data) softmax_fn = nn.Softmax(dim=1) softmax_output = softmax_fn(output[0]) _, prediction = torch.max(softmax_output, dim=1) predicted_class_idx = prediction.item() predicted_class = classes[predicted_class_idx] score = softmax_output[0][predicted_class_idx] logger.info("predicted_class: {}".format(predicted_class)) prediction_dict = {} prediction_dict["predicted_label"] = predicted_class prediction_dict['score'] = score.cpu().detach().numpy().tolist() jsonline = json.dumps(prediction_dict) logger.info("jsonline: {}".format(jsonline)) predicted_classes.append(jsonline) predicted_classes_jsonlines = "\n".join(predicted_classes) return predicted_classes_jsonlines def output_fn(outputs, accept="application/jsonlines"): return outputs, accept ``` ### test.py ```python >>> from inference_nsmc import model_fn, input_fn, predict_fn, output_fn >>> with open('samples/nsmc.txt', mode='rb') as file: >>> model_input_data = file.read() >>> model = model_fn() >>> transformed_inputs = input_fn(model_input_data) >>> predicted_classes_jsonlines = predict_fn(transformed_inputs, model) >>> model_outputs = output_fn(predicted_classes_jsonlines) >>> print(model_outputs[0]) [{inference_nsmc.py:47} INFO - input text: 이 영화는 최고의 영화입니다 [{inference_nsmc.py:47} INFO - input text: 최악이에요. 
배우의 연기력도 좋지 않고 내용도 너무 허접합니다 [{inference_nsmc.py:77} INFO - predicted_class: Pos [{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Pos", "score": 0.9619030952453613} [{inference_nsmc.py:77} INFO - predicted_class: Neg [{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Neg", "score": 0.9994170665740967} {"predicted_label": "Pos", "score": 0.9619030952453613} {"predicted_label": "Neg", "score": 0.9994170665740967} ``` ### Sample data (samples/nsmc.txt) ``` {"text": ["이 영화는 최고의 영화입니다"]} {"text": ["최악이에요. 배우의 연기력도 좋지 않고 내용도 너무 허접합니다"]} ``` ## References - KoELECTRA: https://github.com/monologg/KoELECTRA - Naver Sentiment Movie Corpus Dataset: https://github.com/e9t/nsmc
{"language": ["ko"], "license": "mit", "tags": ["classification"], "datasets": ["nsmc"], "metrics": ["accuracy", "f1", "precision", "recall- accuracy"], "widget": [{"text": "\ubd88\ud6c4\uc758 \uba85\uc791\uc785\ub2c8\ub2e4! \uc774\ub807\uac8c \uac10\ub3d9\uc801\uc778 \ub0b4\uc6a9\uc740 \ucc98\uc74c\uc774\uc5d0\uc694", "example_title": "Positive"}, {"text": "\uc2dc\uac04\uc774 \uc815\ub9d0 \uc544\uae5d\uc2b5\ub2c8\ub2e4. 10\uc810 \ub9cc\uc810\uc5d0 1\uc810\ub3c4 \uc544\uae4c\uc6cc\uc694..", "example_title": "Negative"}]}
daekeun-ml/koelectra-small-v3-nsmc
null
[ "transformers", "pytorch", "electra", "text-classification", "classification", "ko", "dataset:nsmc", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ko" ]
TAGS #transformers #pytorch #electra #text-classification #classification #ko #dataset-nsmc #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Sentiment Binary Classification (fine-tuning with KoELECTRA-Small-v3 model and Naver Sentiment Movie Corpus dataset) ## Usage (Amazon SageMaker inference applicable) It uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to SageMaker Endpoint. ### inference_nsmc.py ### URL ### Sample data (samples/URL) ## References - KoELECTRA: URL - Naver Sentiment Movie Corpus Dataset: URL
[ "# Sentiment Binary Classification (fine-tuning with KoELECTRA-Small-v3 model and Naver Sentiment Movie Corpus dataset)", "## Usage (Amazon SageMaker inference applicable)\nIt uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to SageMaker Endpoint.", "### inference_nsmc.py", "### URL", "### Sample data (samples/URL)", "## References\n- KoELECTRA: URL\n- Naver Sentiment Movie Corpus Dataset: URL" ]
[ "TAGS\n#transformers #pytorch #electra #text-classification #classification #ko #dataset-nsmc #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Sentiment Binary Classification (fine-tuning with KoELECTRA-Small-v3 model and Naver Sentiment Movie Corpus dataset)", "## Usage (Amazon SageMaker inference applicable)\nIt uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to SageMaker Endpoint.", "### inference_nsmc.py", "### URL", "### Sample data (samples/URL)", "## References\n- KoELECTRA: URL\n- Naver Sentiment Movie Corpus Dataset: URL" ]
text-to-image
transformers
# DALL·E Mini Model Card This model card focuses on the model associated with the DALL·E mini space on Hugging Face, available [here](https://huggingface.co/spaces/dalle-mini/dalle-mini). The app is called “dalle-mini”, but incorporates “[DALL·E Mini](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy)’’ and “[DALL·E Mega](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mega-Training-Journal--VmlldzoxODMxMDI2)” models (further details on this distinction forthcoming). The DALL·E Mega model is the largest version of DALLE Mini. For more information specific to DALL·E Mega, see the [DALL·E Mega model card](https://huggingface.co/dalle-mini/dalle-mega). ## Model Details * **Developed by:** Boris Dayma, Suraj Patil, Pedro Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Lê, Luke, Luke Melas, Ritobrata Ghosh * **Model type:** Transformer-based text-to-image generation model * **Language(s):** English * **License:** Apache 2.0 * **Model Description:** This is a model that can be used to generate images based on text prompts. As the model developers wrote in the [project report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy) about DALL·E mini, “OpenAI had the first impressive model for generating images with [DALL·E](https://openai.com/blog/dall-e/). DALL·E mini is an attempt at reproducing those results with an open-source model.” * **Resources for more information:** See OpenAI’s website for more information about [DALL·E](https://openai.com/blog/dall-e/), including the [DALL·E model card](https://github.com/openai/DALL-E/blob/master/model_card.md). See the [project report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy) for more information from the model’s developers. To learn more about DALL·E Mega, see the DALL·E Mega [training journal](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mega-Training--VmlldzoxODMxMDI2#training-parameters). * **Cite as:** ```bib text @misc{Dayma_DALL·E_Mini_2021, author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata}, doi = {10.5281/zenodo.5146400}, month = {7}, title = {DALL·E Mini}, url = {https://github.com/borisdayma/dalle-mini}, year = {2021} } ``` ## Uses ### Direct Use The model is intended to be used to generate images based on text prompts for research and personal consumption. Intended uses include supporting creativity, creating humorous content, and providing generations for people curious about the model’s behavior. Intended uses exclude those described in the [Misuse and Out-of-Scope Use](#misuse-malicious-use-and-out-of-scope-use) section. ### Downstream Use The model could also be used for downstream use cases, including: * Research efforts, such as probing and better understanding the limitations and biases of generative models to further improve the state of science * Development of educational or creative tools * Generation of artwork and use in design and artistic processes. * Other uses that are newly discovered by users. 
This currently includes poetry illustration (give a poem as prompt), fan art (putting a character in various other visual universes), visual puns, fairy tale illustrations (give a fantasy situation as prompt), concept mashups (applying a texture to something completely different), style transfers (portraits in the style of), … We hope you will find your own application! Downstream uses exclude the uses described in [Misuse and Out-of-Scope Use](#misuse-malicious-use-and-out-of-scope-use). ### Misuse, Malicious Use, and Out-of-Scope Use The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes: * Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. * Intentionally promoting or propagating discriminatory content or harmful stereotypes. * Impersonating individuals without their consent. * Sexual content without consent of the people who might see it. * Mis- and disinformation * Representations of egregious violence and gore * Sharing of copyrighted or licensed material in violation of its terms of use. * Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations The model developers discuss the limitations of the model further in the DALL·E Mini [technical report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained-with-Demo--Vmlldzo4NjIxODA): * Faces and people in general are not generated properly. * Animals are usually unrealistic. * It is hard to predict where the model excels or falls short…Good prompt engineering will lead to the best results. * The model has only been trained with English descriptions and will not perform as well in other languages ### Bias **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** The model was trained on unfiltered data from the Internet, limited to pictures with English descriptions. Text and images from communities and cultures using other languages were not utilized. This affects all output of the model, with white and Western culture asserted as a default, and the model’s ability to generate content using non-English prompts is observably lower quality than prompts in English. While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. The extent and nature of the biases of DALL·E Mini and DALL·E Mega models have yet to be fully documented, but initial testing demonstrates that they may generate images that contain negative stereotypes against minoritized groups. Work to analyze the nature and extent of the models’ biases and limitations is ongoing. 
Our current analyses demonstrate that: * Images generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. * When the model generates images with people in them, it tends to output people who we perceive to be white, while people of color are underrepresented. * Images generated by the model can contain biased content that depicts power differentials between people of color and people who are white, with white people in positions of privilege. * The model is generally only usable for generating images based on text in English, limiting accessibility of the model for non-English speakers and potentially contributing to the biases in images generated by the model. The [technical report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained-with-Demo--Vmlldzo4NjIxODA) discusses these issues in more detail, and also highlights potential sources of bias in the model development process. ### Limitations and Bias Recommendations * Users (both direct and downstream) should be made aware of the biases and limitations. * Content that is potentially problematic should be filtered out, e.g., via automated models that detect violence or pornography. * Further work on this model should include methods for balanced and just representations of people and cultures, for example, by curating the training dataset to be both diverse and inclusive. ## Training ### Training Data The model developers used 3 datasets for the model: * [Conceptual Captions Dataset](https://aclanthology.org/P18-1238/), which contains 3 million image and caption pairs. * [Conceptual 12M](https://arxiv.org/abs/2102.08981), which contains 12 million image and caption pairs. * The [OpenAI subset](https://github.com/openai/CLIP/blob/main/data/yfcc100m.md) of [YFCC100M](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/), which contains about 15 million images and that we further sub-sampled to 2 million images due to limitations in storage space. They used both title and description as caption and removed html tags, new lines and extra spaces. For fine-tuning the image encoder, a subset of 2 million images were used. All images (about 15 million) were used for training the Seq2Seq model. ### Training Procedure As described further in the [technical report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained-with-Demo--Vmlldzo4NjIxODA#our-dall-e-model-architecture) for DALL·E Mini, during training, images and descriptions are both available and pass through the system as follows: * Images are encoded through a [VQGAN](https://arxiv.org/abs/2012.09841) encoder, which turns images into a sequence of tokens. * Descriptions are encoded through a [BART](https://arxiv.org/abs/1910.13461) encoder. * The output of the BART encoder and encoded images are fed through the BART decoder, which is an auto-regressive model whose goal is to predict the next token. * Loss is the [softmax cross-entropy](https://wandb.ai/sauravm/Activation-Functions/reports/Activation-Functions-Softmax--VmlldzoxNDU1Njgy#%F0%9F%93%A2-softmax-+-cross-entropy-loss-(caution:-math-alert)) between the model prediction logits and the actual image encodings from the VQGAN. 
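To make the loss bullet above concrete, the following is a minimal, illustrative sketch of a softmax cross-entropy between decoder logits and VQGAN token targets, written in Python with JAX. It is not the project's actual training code; the function name, array shapes, and toy inputs are assumptions chosen for illustration.

```python
# Illustrative sketch only (not the project's training code): softmax cross-entropy between
# decoder logits and the VQGAN token indices of the target image. Shapes and names are assumptions.
import jax
import jax.numpy as jnp

def image_token_loss(logits, target_tokens):
    """logits: (seq_len, vocab_size) decoder predictions;
    target_tokens: (seq_len,) integer VQGAN codebook indices."""
    log_probs = jax.nn.log_softmax(logits, axis=-1)        # normalize over the 16,384-entry codebook
    picked = jnp.take_along_axis(log_probs, target_tokens[:, None], axis=-1)
    return -picked.mean()                                   # mean negative log-likelihood

# Toy usage: a 256-token image sequence over a 16,384-token codebook, with random values.
key = jax.random.PRNGKey(0)
logits = jax.random.normal(key, (256, 16384))
targets = jax.random.randint(key, (256,), 0, 16384)
print(image_token_loss(logits, targets))
```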
The simplified training procedure for DALL·E Mega is as follows:

* **Hardware:** 1 pod TPU v3-256 = 32 nodes of TPU VM v3-8 (8 TPU per node) = 256 TPU v3
* **Optimizer:** Distributed Shampoo
* **Model Partition Specifications:** 8 model parallel x 32 data parallel
* **Batch:** 44 samples per model x 32 data parallel x 3 gradient accumulation steps = 4224 increasing samples per update
* **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant until plateau
* Gradient checkpointing used on each Encoder/Decoder layer (i.e., MHA + FFN)
* Distributed Shampoo + Normformer optimizations have proved effective at efficiently scaling this model.
* It should also be noted that the learning rate and other parameters are sometimes adjusted on the fly, and batch size increased over time as well.

There is more information about the full procedure and technical material in the DALL·E Mega [training journal](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mega-Training--VmlldzoxODMxMDI2#training-parameters).

## Evaluation Results

The model developers discuss their results extensively in their [technical report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained-with-Demo--Vmlldzo4NjIxODA#the-results-of-our-dall-e-experiment) for DALL·E Mini, which provides comparisons between DALL·E Mini’s results with [DALL·E-pytorch](https://github.com/lucidrains/DALLE-pytorch), OpenAI’s [DALL·E](https://openai.com/blog/dall-e/), and models consisting of a generator coupled with the [CLIP neural network model](https://openai.com/blog/clip/).

For evaluation results related to DALL·E Mega, see this [technical report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy).

## Environmental Impact

### DALL·E Mini Estimated Emissions

*The model is 27 times smaller than the original DALL·E and was trained on a single TPU v3-8 for only 3 days.*

Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.

* **Hardware Type:** TPU v3-8
* **Hours used:** 72 (3 days)
* **Cloud Provider:** GCP (as mentioned in the technical report)
* **Compute Region:** us-east1 (provided by model developers)
* **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 30.16 kg CO2 eq.

### DALL·E Mega Estimated Emissions

DALL·E Mega is still training. So far, as of June 9, 2022, the model developers report that DALL·E Mega has been training for about 40-45 days on a TPU v3-256. Using those numbers, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
* **Hardware Type:** TPU v3-256
* **Hours used:** 960 - 1080 hours (40-45 days)
* **Cloud Provider:** Unknown
* **Compute Region:** Unknown
* **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** Unknown

## Citation

```bibtex
@misc{Dayma_DALL·E_Mini_2021,
 author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
 doi = {10.5281/zenodo.5146400},
 month = {7},
 title = {DALL·E Mini},
 url = {https://github.com/borisdayma/dalle-mini},
 year = {2021}
}
```

*This model card was written by: Boris Dayma, Margaret Mitchell, Ezi Ozoani, Marissa Gerchick, Irene Solaiman, Clémentine Fourrier, Sasha Luccioni, Emily Witko, Nazneen Rajani, and Julian Herrera.*
{"language": "en", "license": "apache-2.0", "tags": ["text-to-image"], "inference": false, "co2_eq_emissions": {"emissions": 7540, "source": "MLCo2 Machine Learning Impact calculator", "geographical_location": "East USA", "hardware_used": "TPU v3-8"}, "model-index": [{"name": "dalle-mini", "results": []}]}
dalle-mini/dalle-mini
null
[ "transformers", "jax", "dallebart", "text-to-image", "en", "arxiv:2102.08981", "arxiv:2012.09841", "arxiv:1910.13461", "arxiv:1910.09700", "license:apache-2.0", "co2_eq_emissions", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2102.08981", "2012.09841", "1910.13461", "1910.09700" ]
[ "en" ]
TAGS #transformers #jax #dallebart #text-to-image #en #arxiv-2102.08981 #arxiv-2012.09841 #arxiv-1910.13461 #arxiv-1910.09700 #license-apache-2.0 #co2_eq_emissions #has_space #region-us
# DALL·E Mini Model Card This model card focuses on the model associated with the DALL·E mini space on Hugging Face, available here. The app is called “dalle-mini”, but incorporates “DALL·E Mini’’ and “DALL·E Mega” models (further details on this distinction forthcoming). The DALL·E Mega model is the largest version of DALLE Mini. For more information specific to DALL·E Mega, see the DALL·E Mega model card. ## Model Details * Developed by: Boris Dayma, Suraj Patil, Pedro Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Lê, Luke, Luke Melas, Ritobrata Ghosh * Model type: Transformer-based text-to-image generation model * Language(s): English * License: Apache 2.0 * Model Description: This is a model that can be used to generate images based on text prompts. As the model developers wrote in the project report about DALL·E mini, “OpenAI had the first impressive model for generating images with DALL·E. DALL·E mini is an attempt at reproducing those results with an open-source model.” * Resources for more information: See OpenAI’s website for more information about DALL·E, including the DALL·E model card. See the project report for more information from the model’s developers. To learn more about DALL·E Mega, see the DALL·E Mega training journal. * Cite as: ## Uses ### Direct Use The model is intended to be used to generate images based on text prompts for research and personal consumption. Intended uses include supporting creativity, creating humorous content, and providing generations for people curious about the model’s behavior. Intended uses exclude those described in the Misuse and Out-of-Scope Use section. ### Downstream Use The model could also be used for downstream use cases, including: * Research efforts, such as probing and better understanding the limitations and biases of generative models to further improve the state of science * Development of educational or creative tools * Generation of artwork and use in design and artistic processes. * Other uses that are newly discovered by users. This currently includes poetry illustration (give a poem as prompt), fan art (putting a character in various other visual universes), visual puns, fairy tale illustrations (give a fantasy situation as prompt), concept mashups (applying a texture to something completely different), style transfers (portraits in the style of), … We hope you will find your own application! Downstream uses exclude the uses described in Misuse and Out-of-Scope Use. ### Misuse, Malicious Use, and Out-of-Scope Use The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes: * Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. * Intentionally promoting or propagating discriminatory content or harmful stereotypes. * Impersonating individuals without their consent. * Sexual content without consent of the people who might see it. 
* Mis- and disinformation * Representations of egregious violence and gore * Sharing of copyrighted or licensed material in violation of its terms of use. * Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations The model developers discuss the limitations of the model further in the DALL·E Mini technical report: * Faces and people in general are not generated properly. * Animals are usually unrealistic. * It is hard to predict where the model excels or falls short…Good prompt engineering will lead to the best results. * The model has only been trained with English descriptions and will not perform as well in other languages ### Bias CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. The model was trained on unfiltered data from the Internet, limited to pictures with English descriptions. Text and images from communities and cultures using other languages were not utilized. This affects all output of the model, with white and Western culture asserted as a default, and the model’s ability to generate content using non-English prompts is observably lower quality than prompts in English. While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. The extent and nature of the biases of DALL·E Mini and DALL·E Mega models have yet to be fully documented, but initial testing demonstrates that they may generate images that contain negative stereotypes against minoritized groups. Work to analyze the nature and extent of the models’ biases and limitations is ongoing. Our current analyses demonstrate that: * Images generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. * When the model generates images with people in them, it tends to output people who we perceive to be white, while people of color are underrepresented. * Images generated by the model can contain biased content that depicts power differentials between people of color and people who are white, with white people in positions of privilege. * The model is generally only usable for generating images based on text in English, limiting accessibility of the model for non-English speakers and potentially contributing to the biases in images generated by the model. The technical report discusses these issues in more detail, and also highlights potential sources of bias in the model development process. ### Limitations and Bias Recommendations * Users (both direct and downstream) should be made aware of the biases and limitations. * Content that is potentially problematic should be filtered out, e.g., via automated models that detect violence or pornography. * Further work on this model should include methods for balanced and just representations of people and cultures, for example, by curating the training dataset to be both diverse and inclusive. ## Training ### Training Data The model developers used 3 datasets for the model: * Conceptual Captions Dataset, which contains 3 million image and caption pairs. * Conceptual 12M, which contains 12 million image and caption pairs. * The OpenAI subset of YFCC100M, which contains about 15 million images and that we further sub-sampled to 2 million images due to limitations in storage space. 
They used both title and description as caption and removed html tags, new lines and extra spaces. For fine-tuning the image encoder, a subset of 2 million images were used. All images (about 15 million) were used for training the Seq2Seq model. ### Training Procedure As described further in the technical report for DALL·E Mini, during training, images and descriptions are both available and pass through the system as follows: * Images are encoded through a VQGAN encoder, which turns images into a sequence of tokens. * Descriptions are encoded through a BART encoder. * The output of the BART encoder and encoded images are fed through the BART decoder, which is an auto-regressive model whose goal is to predict the next token. * Loss is the softmax cross-entropy) between the model prediction logits and the actual image encodings from the VQGAN. The simplified training procedure for DALL·E Mega is as follows: * Hardware: 1 pod TPU v3-256 = 32 nodes of TPU VM v3-8 (8 TPU per node) = 256 TPU v3 * Optimizer: Distributed Shampoo * Model Partition Specificiations: 8 model parallel x 32 data parallel * Batch: 44 samples per model x 32 data parallel x 3 gradient accumulation steps = 4224 increasing samples per update * Learning rate: warmup to 0.0001 for 10,000 steps and then kept constant until plateau * Gradient checkpointing used on each Encoder/Decoder layer (ie, MHA + FFN) * Distributed Shampoo + Normformer Optimizations have proved to be effective and efficiently scaling this model. * It should also be noted that the learning rate and other parameters are sometimes adjusted on the fly, and batch size increased over time as well. There is more information about the full procedure and technical material in the DALL·E Mega training journal. ## Evaluation Results The model developers discuss their results extensively in their technical report for DALL·E Mini, which provides comparisons between DALL·E Mini’s results with DALL·E-pytorch, OpenAI’s DALL·E, and models consisting of a generator coupled with the CLIP neural network model. For evaluation results related to DALL·E Mega, see this technical report. ## Environmental Impact ### DALL·E Mini Estimated Emissions *The model is 27 times smaller than the original DALL·E and was trained on a single TPU v3-8 for only 3 days.* Based on that information, we estimate the following CO2 emissions using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. * Hardware Type: TPU v3-8 * Hours used: 72 (3 days) * Cloud Provider: GCP (as mentioned in the technical report) * Compute Region: us-east1 (provided by model developers) * Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid): 30.16 kg CO2 eq. ### DALL·E Mega Estimated Emissions DALL·E Mega is still training. So far, as on June 9, 2022, the model developers report that DALL·E Mega has been training for about 40-45 days on a TPU v3-256. Using those numbers, we estimate the following CO2 emissions using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. 
* Hardware Type: TPU v3-256 * Hours used: 960 - 1080 hours (40-45 days) * Cloud Provider: Unknown * Compute Region: Unknown * Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid): Unknown *This model card was written by: Boris Dayma, Margaret Mitchell, Ezi Ozoani, Marissa Gerchick, Irene Solaiman, Clémentine Fourrier, Sasha Luccioni, Emily Witko, Nazneen Rajani, and Julian Herrera.*
[ "# DALL·E Mini Model Card\n\nThis model card focuses on the model associated with the DALL·E mini space on Hugging Face, available here. The app is called “dalle-mini”, but incorporates “DALL·E Mini’’ and “DALL·E Mega” models (further details on this distinction forthcoming).\n\nThe DALL·E Mega model is the largest version of DALLE Mini. For more information specific to DALL·E Mega, see the DALL·E Mega model card.", "## Model Details\n\n* Developed by: Boris Dayma, Suraj Patil, Pedro Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Lê, Luke, Luke Melas, Ritobrata Ghosh\n* Model type: Transformer-based text-to-image generation model\n* Language(s): English\n* License: Apache 2.0\n* Model Description: This is a model that can be used to generate images based on text prompts. As the model developers wrote in the project report about DALL·E mini, “OpenAI had the first impressive model for generating images with DALL·E. DALL·E mini is an attempt at reproducing those results with an open-source model.”\n* Resources for more information: See OpenAI’s website for more information about DALL·E, including the DALL·E model card. See the project report for more information from the model’s developers. To learn more about DALL·E Mega, see the DALL·E Mega training journal.\n* Cite as:", "## Uses", "### Direct Use\n\nThe model is intended to be used to generate images based on text prompts for research and personal consumption. Intended uses include supporting creativity, creating humorous content, and providing generations for people curious about the model’s behavior. Intended uses exclude those described in the Misuse and Out-of-Scope Use section.", "### Downstream Use\n\nThe model could also be used for downstream use cases, including:\n* Research efforts, such as probing and better understanding the limitations and biases of generative models to further improve the state of science\n* Development of educational or creative tools\n* Generation of artwork and use in design and artistic processes. \n* Other uses that are newly discovered by users. This currently includes poetry illustration (give a poem as prompt), fan art (putting a character in various other visual universes), visual puns, fairy tale illustrations (give a fantasy situation as prompt), concept mashups (applying a texture to something completely different), style transfers (portraits in the style of), … We hope you will find your own application!\n\nDownstream uses exclude the uses described in Misuse and Out-of-Scope Use.", "### Misuse, Malicious Use, and Out-of-Scope Use\n\nThe model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.", "#### Out-of-Scope Use\n\nThe model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.", "#### Misuse and Malicious Use \n\nUsing the model to generate content that is cruel to individuals is a misuse of this model. 
This includes:\n* Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.\n* Intentionally promoting or propagating discriminatory content or harmful stereotypes.\n* Impersonating individuals without their consent.\n* Sexual content without consent of the people who might see it.\n* Mis- and disinformation\n* Representations of egregious violence and gore\n* Sharing of copyrighted or licensed material in violation of its terms of use.\n* Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.", "## Limitations and Bias", "### Limitations\n\nThe model developers discuss the limitations of the model further in the DALL·E Mini technical report:\n* Faces and people in general are not generated properly.\n* Animals are usually unrealistic.\n* It is hard to predict where the model excels or falls short…Good prompt engineering will lead to the best results.\n* The model has only been trained with English descriptions and will not perform as well in other languages", "### Bias \n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nThe model was trained on unfiltered data from the Internet, limited to pictures with English descriptions. Text and images from communities and cultures using other languages were not utilized. This affects all output of the model, with white and Western culture asserted as a default, and the model’s ability to generate content using non-English prompts is observably lower quality than prompts in English.\n\nWhile the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. The extent and nature of the biases of DALL·E Mini and DALL·E Mega models have yet to be fully documented, but initial testing demonstrates that they may generate images that contain negative stereotypes against minoritized groups. Work to analyze the nature and extent of the models’ biases and limitations is ongoing.\n\nOur current analyses demonstrate that:\n* Images generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n* When the model generates images with people in them, it tends to output people who we perceive to be white, while people of color are underrepresented. 
\n* Images generated by the model can contain biased content that depicts power differentials between people of color and people who are white, with white people in positions of privilege.\n* The model is generally only usable for generating images based on text in English, limiting accessibility of the model for non-English speakers and potentially contributing to the biases in images generated by the model.\n\nThe technical report discusses these issues in more detail, and also highlights potential sources of bias in the model development process.", "### Limitations and Bias Recommendations\n\n* Users (both direct and downstream) should be made aware of the biases and limitations.\n* Content that is potentially problematic should be filtered out, e.g., via automated models that detect violence or pornography.\n* Further work on this model should include methods for balanced and just representations of people and cultures, for example, by curating the training dataset to be both diverse and inclusive.", "## Training", "### Training Data\n\nThe model developers used 3 datasets for the model:\n* Conceptual Captions Dataset, which contains 3 million image and caption pairs.\n* Conceptual 12M, which contains 12 million image and caption pairs.\n* The OpenAI subset of YFCC100M, which contains about 15 million images and that we further sub-sampled to 2 million images due to limitations in storage space. They used both title and description as caption and removed html tags, new lines and extra spaces.\n\nFor fine-tuning the image encoder, a subset of 2 million images were used.\nAll images (about 15 million) were used for training the Seq2Seq model.", "### Training Procedure\n\nAs described further in the technical report for DALL·E Mini, during training, images and descriptions are both available and pass through the system as follows:\n* Images are encoded through a VQGAN encoder, which turns images into a sequence of tokens.\n* Descriptions are encoded through a BART encoder.\n* The output of the BART encoder and encoded images are fed through the BART decoder, which is an auto-regressive model whose goal is to predict the next token.\n* Loss is the softmax cross-entropy) between the model prediction logits and the actual image encodings from the VQGAN.\n\nThe simplified training procedure for DALL·E Mega is as follows: \n\n* Hardware: 1 pod TPU v3-256 = 32 nodes of TPU VM v3-8 (8 TPU per node) = 256 TPU v3\n* Optimizer: Distributed Shampoo\n* Model Partition Specificiations: 8 model parallel x 32 data parallel\n* Batch: 44 samples per model x 32 data parallel x 3 gradient accumulation steps = 4224 increasing samples per update\n* Learning rate: warmup to 0.0001 for 10,000 steps and then kept constant until plateau\n* Gradient checkpointing used on each Encoder/Decoder layer (ie, MHA + FFN)\n* Distributed Shampoo + Normformer Optimizations have proved to be effective and efficiently scaling this model. \n* It should also be noted that the learning rate and other parameters are sometimes adjusted on the fly, and batch size increased over time as well.\n\nThere is more information about the full procedure and technical material in the DALL·E Mega training journal.", "## Evaluation Results\n\nThe model developers discuss their results extensively in their technical report for DALL·E Mini, which provides comparisons between DALL·E Mini’s results with DALL·E-pytorch, OpenAI’s DALL·E, and models consisting of a generator coupled with the CLIP neural network model. 
\n\nFor evaluation results related to DALL·E Mega, see this technical report.", "## Environmental Impact", "### DALL·E Mini Estimated Emissions\n\n*The model is 27 times smaller than the original DALL·E and was trained on a single TPU v3-8 for only 3 days.*\n\nBased on that information, we estimate the following CO2 emissions using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.\n\n* Hardware Type: TPU v3-8\n* Hours used: 72 (3 days)\n* Cloud Provider: GCP (as mentioned in the technical report)\n* Compute Region: us-east1 (provided by model developers)\n* Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid): 30.16 kg CO2 eq.", "### DALL·E Mega Estimated Emissions\n\nDALL·E Mega is still training. So far, as on June 9, 2022, the model developers report that DALL·E Mega has been training for about 40-45 days on a TPU v3-256. Using those numbers, we estimate the following CO2 emissions using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.\n\n* Hardware Type: TPU v3-256\n* Hours used: 960 - 1080 hours (40-45 days)\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid): Unknown\n\n*This model card was written by: Boris Dayma, Margaret Mitchell, Ezi Ozoani, Marissa Gerchick, Irene Solaiman, Clémentine Fourrier, Sasha Luccioni, Emily Witko, Nazneen Rajani, and Julian Herrera.*" ]
[ "TAGS\n#transformers #jax #dallebart #text-to-image #en #arxiv-2102.08981 #arxiv-2012.09841 #arxiv-1910.13461 #arxiv-1910.09700 #license-apache-2.0 #co2_eq_emissions #has_space #region-us \n", "# DALL·E Mini Model Card\n\nThis model card focuses on the model associated with the DALL·E mini space on Hugging Face, available here. The app is called “dalle-mini”, but incorporates “DALL·E Mini’’ and “DALL·E Mega” models (further details on this distinction forthcoming).\n\nThe DALL·E Mega model is the largest version of DALLE Mini. For more information specific to DALL·E Mega, see the DALL·E Mega model card.", "## Model Details\n\n* Developed by: Boris Dayma, Suraj Patil, Pedro Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Lê, Luke, Luke Melas, Ritobrata Ghosh\n* Model type: Transformer-based text-to-image generation model\n* Language(s): English\n* License: Apache 2.0\n* Model Description: This is a model that can be used to generate images based on text prompts. As the model developers wrote in the project report about DALL·E mini, “OpenAI had the first impressive model for generating images with DALL·E. DALL·E mini is an attempt at reproducing those results with an open-source model.”\n* Resources for more information: See OpenAI’s website for more information about DALL·E, including the DALL·E model card. See the project report for more information from the model’s developers. To learn more about DALL·E Mega, see the DALL·E Mega training journal.\n* Cite as:", "## Uses", "### Direct Use\n\nThe model is intended to be used to generate images based on text prompts for research and personal consumption. Intended uses include supporting creativity, creating humorous content, and providing generations for people curious about the model’s behavior. Intended uses exclude those described in the Misuse and Out-of-Scope Use section.", "### Downstream Use\n\nThe model could also be used for downstream use cases, including:\n* Research efforts, such as probing and better understanding the limitations and biases of generative models to further improve the state of science\n* Development of educational or creative tools\n* Generation of artwork and use in design and artistic processes. \n* Other uses that are newly discovered by users. This currently includes poetry illustration (give a poem as prompt), fan art (putting a character in various other visual universes), visual puns, fairy tale illustrations (give a fantasy situation as prompt), concept mashups (applying a texture to something completely different), style transfers (portraits in the style of), … We hope you will find your own application!\n\nDownstream uses exclude the uses described in Misuse and Out-of-Scope Use.", "### Misuse, Malicious Use, and Out-of-Scope Use\n\nThe model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.", "#### Out-of-Scope Use\n\nThe model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.", "#### Misuse and Malicious Use \n\nUsing the model to generate content that is cruel to individuals is a misuse of this model. 
This includes:\n* Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.\n* Intentionally promoting or propagating discriminatory content or harmful stereotypes.\n* Impersonating individuals without their consent.\n* Sexual content without consent of the people who might see it.\n* Mis- and disinformation\n* Representations of egregious violence and gore\n* Sharing of copyrighted or licensed material in violation of its terms of use.\n* Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.", "## Limitations and Bias", "### Limitations\n\nThe model developers discuss the limitations of the model further in the DALL·E Mini technical report:\n* Faces and people in general are not generated properly.\n* Animals are usually unrealistic.\n* It is hard to predict where the model excels or falls short…Good prompt engineering will lead to the best results.\n* The model has only been trained with English descriptions and will not perform as well in other languages", "### Bias \n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nThe model was trained on unfiltered data from the Internet, limited to pictures with English descriptions. Text and images from communities and cultures using other languages were not utilized. This affects all output of the model, with white and Western culture asserted as a default, and the model’s ability to generate content using non-English prompts is observably lower quality than prompts in English.\n\nWhile the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. The extent and nature of the biases of DALL·E Mini and DALL·E Mega models have yet to be fully documented, but initial testing demonstrates that they may generate images that contain negative stereotypes against minoritized groups. Work to analyze the nature and extent of the models’ biases and limitations is ongoing.\n\nOur current analyses demonstrate that:\n* Images generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n* When the model generates images with people in them, it tends to output people who we perceive to be white, while people of color are underrepresented. 
\n* Images generated by the model can contain biased content that depicts power differentials between people of color and people who are white, with white people in positions of privilege.\n* The model is generally only usable for generating images based on text in English, limiting accessibility of the model for non-English speakers and potentially contributing to the biases in images generated by the model.\n\nThe technical report discusses these issues in more detail, and also highlights potential sources of bias in the model development process.", "### Limitations and Bias Recommendations\n\n* Users (both direct and downstream) should be made aware of the biases and limitations.\n* Content that is potentially problematic should be filtered out, e.g., via automated models that detect violence or pornography.\n* Further work on this model should include methods for balanced and just representations of people and cultures, for example, by curating the training dataset to be both diverse and inclusive.", "## Training", "### Training Data\n\nThe model developers used 3 datasets for the model:\n* Conceptual Captions Dataset, which contains 3 million image and caption pairs.\n* Conceptual 12M, which contains 12 million image and caption pairs.\n* The OpenAI subset of YFCC100M, which contains about 15 million images and that we further sub-sampled to 2 million images due to limitations in storage space. They used both title and description as caption and removed html tags, new lines and extra spaces.\n\nFor fine-tuning the image encoder, a subset of 2 million images were used.\nAll images (about 15 million) were used for training the Seq2Seq model.", "### Training Procedure\n\nAs described further in the technical report for DALL·E Mini, during training, images and descriptions are both available and pass through the system as follows:\n* Images are encoded through a VQGAN encoder, which turns images into a sequence of tokens.\n* Descriptions are encoded through a BART encoder.\n* The output of the BART encoder and encoded images are fed through the BART decoder, which is an auto-regressive model whose goal is to predict the next token.\n* Loss is the softmax cross-entropy) between the model prediction logits and the actual image encodings from the VQGAN.\n\nThe simplified training procedure for DALL·E Mega is as follows: \n\n* Hardware: 1 pod TPU v3-256 = 32 nodes of TPU VM v3-8 (8 TPU per node) = 256 TPU v3\n* Optimizer: Distributed Shampoo\n* Model Partition Specificiations: 8 model parallel x 32 data parallel\n* Batch: 44 samples per model x 32 data parallel x 3 gradient accumulation steps = 4224 increasing samples per update\n* Learning rate: warmup to 0.0001 for 10,000 steps and then kept constant until plateau\n* Gradient checkpointing used on each Encoder/Decoder layer (ie, MHA + FFN)\n* Distributed Shampoo + Normformer Optimizations have proved to be effective and efficiently scaling this model. \n* It should also be noted that the learning rate and other parameters are sometimes adjusted on the fly, and batch size increased over time as well.\n\nThere is more information about the full procedure and technical material in the DALL·E Mega training journal.", "## Evaluation Results\n\nThe model developers discuss their results extensively in their technical report for DALL·E Mini, which provides comparisons between DALL·E Mini’s results with DALL·E-pytorch, OpenAI’s DALL·E, and models consisting of a generator coupled with the CLIP neural network model. 
\n\nFor evaluation results related to DALL·E Mega, see this technical report.", "## Environmental Impact", "### DALL·E Mini Estimated Emissions\n\n*The model is 27 times smaller than the original DALL·E and was trained on a single TPU v3-8 for only 3 days.*\n\nBased on that information, we estimate the following CO2 emissions using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.\n\n* Hardware Type: TPU v3-8\n* Hours used: 72 (3 days)\n* Cloud Provider: GCP (as mentioned in the technical report)\n* Compute Region: us-east1 (provided by model developers)\n* Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid): 30.16 kg CO2 eq.", "### DALL·E Mega Estimated Emissions\n\nDALL·E Mega is still training. So far, as on June 9, 2022, the model developers report that DALL·E Mega has been training for about 40-45 days on a TPU v3-256. Using those numbers, we estimate the following CO2 emissions using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.\n\n* Hardware Type: TPU v3-256\n* Hours used: 960 - 1080 hours (40-45 days)\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid): Unknown\n\n*This model card was written by: Boris Dayma, Margaret Mitchell, Ezi Ozoani, Marissa Gerchick, Irene Solaiman, Clémentine Fourrier, Sasha Luccioni, Emily Witko, Nazneen Rajani, and Julian Herrera.*" ]
null
transformers
## VQGAN-f16-16384

### Model Description

This is a Flax/JAX implementation of VQGAN, which learns a codebook of context-rich visual parts by leveraging both convolutional methods and transformers. It was introduced in [Taming Transformers for High-Resolution Image Synthesis](https://compvis.github.io/taming-transformers/) ([CVPR paper](https://openaccess.thecvf.com/content/CVPR2021/html/Esser_Taming_Transformers_for_High-Resolution_Image_Synthesis_CVPR_2021_paper.html)).

The model allows the encoding of images as a fixed-length sequence of tokens taken from the codebook.

This version of the model uses a reduction factor `f=16` and a vocabulary of `16,384` tokens.

As an example of how the reduction factor works, images of size `256x256` are encoded to sequences of `256` tokens: `256/16 * 256/16`. Images of `512x512` would result in sequences of `1024` tokens.

This model was ported to JAX using [a checkpoint trained on ImageNet](https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/).

### How to Use

The checkpoint can be loaded using [Suraj Patil's implementation](https://github.com/patil-suraj/vqgan-jax) of `VQModel`; a minimal loading sketch is shown below.

### Other

This model can be used as part of the implementation of [DALL·E mini](https://github.com/borisdayma/dalle-mini). Our [report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA) contains more details on how to leverage it in an image encoding / generation pipeline.
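As a usage illustration for the "How to Use" section above, here is a minimal loading sketch. It assumes the linked vqgan-jax implementation is installed and exposes `VQModel` under `vqgan_jax.modeling_flax_vqgan`, as in that repository; exact module paths may differ between versions.

```python
# Minimal loading sketch (assumes the vqgan-jax implementation linked above is installed,
# e.g. via: pip install git+https://github.com/patil-suraj/vqgan-jax.git).
from vqgan_jax.modeling_flax_vqgan import VQModel

# Load this checkpoint from the Hugging Face Hub.
vqgan = VQModel.from_pretrained("dalle-mini/vqgan_imagenet_f16_16384")

# With reduction factor f=16, a 256x256 image is encoded as (256 // 16) * (256 // 16) = 256
# codebook indices, each drawn from the 16,384-entry vocabulary described above.
print((256 // 16) * (256 // 16))  # 256
```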
{}
dalle-mini/vqgan_imagenet_f16_16384
null
[ "transformers", "jax", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #jax #endpoints_compatible #has_space #region-us
## VQGAN-f16-16384 ### Model Description This is a Flax/JAX implementation of VQGAN, which learns a codebook of context-rich visual parts by leveraging both the use of convolutional methods and transformers. It was introduced in Taming Transformers for High-Resolution Image Synthesis (CVPR paper). The model allows the encoding of images as a fixed-length sequence of tokens taken from the codebook. This version of the model uses a reduction factor 'f=16' and a vocabulary of '16,384' tokens. As an example of how the reduction factor works, images of size '256x256' are encoded to sequences of '256' tokens: '256/16 * 256/16'. Images of '512x512' would result in sequences of '1024' tokens. This model was ported to JAX using a checkpoint trained on ImageNet. ### How to Use The checkpoint can be loaded using Suraj Patil's implementation of 'VQModel'. ### Other This model can be used as part of the implementation of DALL·E mini. Our report contains more details on how to leverage it in an image encoding / generation pipeline.
[ "## VQGAN-f16-16384", "### Model Description\n\nThis is a Flax/JAX implementation of VQGAN, which learns a codebook of context-rich visual parts by leveraging both the use of convolutional methods and transformers. It was introduced in Taming Transformers for High-Resolution Image Synthesis (CVPR paper).\n\nThe model allows the encoding of images as a fixed-length sequence of tokens taken from the codebook.\n\nThis version of the model uses a reduction factor 'f=16' and a vocabulary of '16,384' tokens.\n\nAs an example of how the reduction factor works, images of size '256x256' are encoded to sequences of '256' tokens: '256/16 * 256/16'. Images of '512x512' would result in sequences of '1024' tokens.\n\nThis model was ported to JAX using a checkpoint trained on ImageNet.", "### How to Use\n\nThe checkpoint can be loaded using Suraj Patil's implementation of 'VQModel'.", "### Other\n\nThis model can be used as part of the implementation of DALL·E mini. Our report contains more details on how to leverage it in an image encoding / generation pipeline." ]
[ "TAGS\n#transformers #jax #endpoints_compatible #has_space #region-us \n", "## VQGAN-f16-16384", "### Model Description\n\nThis is a Flax/JAX implementation of VQGAN, which learns a codebook of context-rich visual parts by leveraging both the use of convolutional methods and transformers. It was introduced in Taming Transformers for High-Resolution Image Synthesis (CVPR paper).\n\nThe model allows the encoding of images as a fixed-length sequence of tokens taken from the codebook.\n\nThis version of the model uses a reduction factor 'f=16' and a vocabulary of '16,384' tokens.\n\nAs an example of how the reduction factor works, images of size '256x256' are encoded to sequences of '256' tokens: '256/16 * 256/16'. Images of '512x512' would result in sequences of '1024' tokens.\n\nThis model was ported to JAX using a checkpoint trained on ImageNet.", "### How to Use\n\nThe checkpoint can be loaded using Suraj Patil's implementation of 'VQModel'.", "### Other\n\nThis model can be used as part of the implementation of DALL·E mini. Our report contains more details on how to leverage it in an image encoding / generation pipeline." ]
fill-mask
transformers
# HIV_BERT model ## Table of Contents - [Summary](#model-summary) - [Model Description](#model-description) - [Intended Uses & Limitations](#intended-uses-&-limitations) - [How to Use](#how-to-use) - [Training Data](#training-data) - [Training Procedure](#training-procedure) - [Preprocessing](#preprocessing) - [Training](#training) - [Evaluation Results](#evaluation-results) - [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info) ## Summary The HIV-BERT model was trained as a refinement of the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) for HIV centric tasks. It was refined with whole viral genomes from the [Los Alamos HIV Sequence Database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). This pretraining is important for HIV related tasks as the original BFD database contains few viral proteins making it sub-optimal when used as the basis for transfer learning tasks. This model and other related HIV prediction tasks have been published (link). ## Model Description Like the original [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd), this model encodes each amino acid as an individual token. This model was trained using Masked Language Modeling: a process in which a random set of tokens are masked with the model trained on their prediction. This model was trained using the damlab/hiv-flt dataset with 256 amino acid chunks and a 15% mask rate. ## Intended Uses & Limitations As a masked language model this tool can be used to predict expected mutations using a masking approach. This could be used to identify highly mutated sequences, sequencing artifacts, or other contexts. As a BERT model, this tool can also be used as the base for transfer learning. This pretrained model could be used as the base when developing HIV-specific classification tasks. ## How to use As this is a BERT-style Masked Language learner, it can be used to determine the most likely amino acid at a masked position. ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="damlab/HIV_FLT") unmasker(f"C T R P N [MASK] N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C") [ { "score": 0.9581968188285828, "token": 17, "token_str": "N", "sequence": "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C" }, { "score": 0.022986575961112976, "token": 12, "token_str": "K", "sequence": "C T R P N K N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C" }, { "score": 0.003997281193733215, "token": 14, "token_str": "D", "sequence": "C T R P N D N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C" }, { "score": 0.003636382520198822, "token": 15, "token_str": "T", "sequence": "C T R P N T N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C" }, { "score": 0.002701344434171915, "token": 10, "token_str": "S", "sequence": "C T R P N S N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C" } ] ``` ## Training Data The dataset [damlab/HIV_FLT](https://huggingface.co/datasets/damlab/HIV_FLT) was used to refine the original [rostlab/Prot-bert-bfd](https://huggingface.co/Rostlab/prot_bert_bfd). This dataset contains 1790 full HIV genomes from across the globe. When translated, these genomes contain approximately 3.9 million amino-acid tokens. ## Training Procedure ### Preprocessing As with the [rostlab/Prot-bert-bfd](https://huggingface.co/Rostlab/prot_bert_bfd) model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. 
All strings were concatenated and chunked into 256-token chunks for training. A random 20% of chunks were held out for validation. (A hedged preprocessing sketch is included below.)

### Training

Training was performed with the HuggingFace training module using the MaskedLM data loader with a 15% masking rate. The learning rate was set at E-5 with 50K warm-up steps and a cosine_with_restarts learning rate schedule, and training continued until 3 consecutive epochs did not improve the loss on the held-out dataset.

## BibTeX Entry and Citation Info

[More Information Needed]
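As referenced above, here is a hedged sketch of the preprocessing and masking steps described in this card. The helper function and toy input are hypothetical, and the collator call assumes the tokenizer distributed with this checkpoint; this is not the authors' original training script.

```python
# Hedged sketch of the preprocessing and masking setup described above
# (hypothetical helper names; not the authors' original training script).
import re
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

def preprocess_sequences(sequences, chunk_size=256):
    """Replace rare amino acids U, Z, O, B with X, space-separate residues,
    concatenate everything, and cut into 256-residue chunks."""
    spaced = [" ".join(re.sub(r"[UZOB]", "X", seq)) for seq in sequences]
    residues = " ".join(spaced).split()
    return [" ".join(residues[i:i + chunk_size]) for i in range(0, len(residues), chunk_size)]

chunks = preprocess_sequences(["MKVLUZOBAA" * 60])   # toy input: 600 residues -> 3 chunks
print(len(chunks))

# Masked-language-modeling collator with the 15% masking rate described above
# (assumes the tokenizer shipped with this checkpoint).
tokenizer = AutoTokenizer.from_pretrained("damlab/HIV_BERT")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
```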
{"license": "mit", "datasets": ["damlab/HIV_FLT"], "metrics": ["accuracy"], "widget": [{"text": "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C", "example_title": "V3"}, {"text": "M E P V D P R L E P W K H P G S Q P K T A C T N C Y C K K C C F H C Q V C F I T K A L G I S Y G R K K R R Q R R R A H Q N S Q T H Q A S L S K Q P T S Q P R G D P T G P K E S K K K V E R E T E T D P F D", "example_title": "Tat"}, {"text": "P Q I T L W Q R P L V T I K I G G Q L K E A L L D T G A D D T V L E E M N L P G R W K P K M I G G I G G F I K V R Q Y D Q I L I E I C G H K A I G T V L V G P T P V N I I G R N L L T Q I G C T L N F", "example_title": "PR"}]}
damlab/HIV_BERT
null
[ "transformers", "pytorch", "bert", "fill-mask", "dataset:damlab/HIV_FLT", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #fill-mask #dataset-damlab/HIV_FLT #license-mit #autotrain_compatible #endpoints_compatible #region-us
# HIV_BERT model ## Table of Contents - Summary - Model Description - Intended Uses & Limitations - How to Use - Training Data - Training Procedure - Preprocessing - Training - Evaluation Results - BibTeX Entry and Citation Info ## Summary The HIV-BERT model was trained as a refinement of the ProtBert-BFD model for HIV centric tasks. It was refined with whole viral genomes from the Los Alamos HIV Sequence Database. This pretraining is important for HIV related tasks as the original BFD database contains few viral proteins making it sub-optimal when used as the basis for transfer learning tasks. This model and other related HIV prediction tasks have been published (link). ## Model Description Like the original ProtBert-BFD model, this model encodes each amino acid as an individual token. This model was trained using Masked Language Modeling: a process in which a random set of tokens are masked with the model trained on their prediction. This model was trained using the damlab/hiv-flt dataset with 256 amino acid chunks and a 15% mask rate. ## Intended Uses & Limitations As a masked language model this tool can be used to predict expected mutations using a masking approach. This could be used to identify highly mutated sequences, sequencing artifacts, or other contexts. As a BERT model, this tool can also be used as the base for transfer learning. This pretrained model could be used as the base when developing HIV-specific classification tasks. ## How to use As this is a BERT-style Masked Language learner, it can be used to determine the most likely amino acid at a masked position. ## Training Data The dataset damlab/HIV_FLT was used to refine the original rostlab/Prot-bert-bfd. This dataset contains 1790 full HIV genomes from across the globe. When translated, these genomes contain approximately 3.9 million amino-acid tokens. ## Training Procedure ### Preprocessing As with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation. ### Training Training was performed with the HuggingFace training module using the MaskedLM data loader with a 15% masking rate. The learning rate was set at E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. ## BibTeX Entry and Citation Info
[ "# HIV_BERT model", "## Table of Contents\r\n- Summary\r\n- Model Description\r\n- Intended Uses & Limitations\r\n- How to Use\r\n- Training Data\r\n- Training Procedure\r\n - Preprocessing\r\n - Training\r\n- Evaluation Results\r\n- BibTeX Entry and Citation Info", "## Summary\r\n\r\nThe HIV-BERT model was trained as a refinement of the ProtBert-BFD model for HIV centric tasks. It was refined with whole viral genomes from the Los Alamos HIV Sequence Database. This pretraining is important for HIV related tasks as the original BFD database contains few viral proteins making it sub-optimal when used as the basis for transfer learning tasks. This model and other related HIV prediction tasks have been published (link).", "## Model Description\r\n\r\nLike the original ProtBert-BFD model, this model encodes each amino acid as an individual token. This model was trained using Masked Language Modeling: a process in which a random set of tokens are masked with the model trained on their prediction. This model was trained using the damlab/hiv-flt dataset with 256 amino acid chunks and a 15% mask rate.", "## Intended Uses & Limitations\r\n\r\nAs a masked language model this tool can be used to predict expected mutations using a masking approach. This could be used to identify highly mutated sequences, sequencing artifacts, or other contexts. As a BERT model, this tool can also be used as the base for transfer learning. This pretrained model could be used as the base when developing HIV-specific classification tasks.", "## How to use\r\n\r\nAs this is a BERT-style Masked Language learner, it can be used to determine the most likely amino acid at a masked position.", "## Training Data\r\n\r\nThe dataset damlab/HIV_FLT was used to refine the original rostlab/Prot-bert-bfd. This dataset contains 1790 full HIV genomes from across the globe. When translated, these genomes contain approximately 3.9 million amino-acid tokens.", "## Training Procedure", "### Preprocessing\r\n\r\nAs with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.", "### Training\r\n\r\nTraining was performed with the HuggingFace training module using the MaskedLM data loader with a 15% masking rate. The learning rate was set at E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset.", "## BibTeX Entry and Citation Info" ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #dataset-damlab/HIV_FLT #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# HIV_BERT model", "## Table of Contents\r\n- Summary\r\n- Model Description\r\n- Intended Uses & Limitations\r\n- How to Use\r\n- Training Data\r\n- Training Procedure\r\n - Preprocessing\r\n - Training\r\n- Evaluation Results\r\n- BibTeX Entry and Citation Info", "## Summary\r\n\r\nThe HIV-BERT model was trained as a refinement of the ProtBert-BFD model for HIV centric tasks. It was refined with whole viral genomes from the Los Alamos HIV Sequence Database. This pretraining is important for HIV related tasks as the original BFD database contains few viral proteins making it sub-optimal when used as the basis for transfer learning tasks. This model and other related HIV prediction tasks have been published (link).", "## Model Description\r\n\r\nLike the original ProtBert-BFD model, this model encodes each amino acid as an individual token. This model was trained using Masked Language Modeling: a process in which a random set of tokens are masked with the model trained on their prediction. This model was trained using the damlab/hiv-flt dataset with 256 amino acid chunks and a 15% mask rate.", "## Intended Uses & Limitations\r\n\r\nAs a masked language model this tool can be used to predict expected mutations using a masking approach. This could be used to identify highly mutated sequences, sequencing artifacts, or other contexts. As a BERT model, this tool can also be used as the base for transfer learning. This pretrained model could be used as the base when developing HIV-specific classification tasks.", "## How to use\r\n\r\nAs this is a BERT-style Masked Language learner, it can be used to determine the most likely amino acid at a masked position.", "## Training Data\r\n\r\nThe dataset damlab/HIV_FLT was used to refine the original rostlab/Prot-bert-bfd. This dataset contains 1790 full HIV genomes from across the globe. When translated, these genomes contain approximately 3.9 million amino-acid tokens.", "## Training Procedure", "### Preprocessing\r\n\r\nAs with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.", "### Training\r\n\r\nTraining was performed with the HuggingFace training module using the MaskedLM data loader with a 15% masking rate. The learning rate was set at E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset.", "## BibTeX Entry and Citation Info" ]
text-classification
transformers
# HIV_PR_resist model

## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
  - [Preprocessing](#preprocessing)
  - [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)

## Summary

The HIV-BERT-Protease-Resistance model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict whether an HIV protease sequence will be resistant to certain protease inhibitors. HIV-BERT is a model refined from the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV protease sequences from the [Stanford HIV Genotype-Phenotype Database](https://hivdb.stanford.edu/pages/genotype-phenotype.html), allowing even more precise prediction of protease inhibitor resistance than the HIV-BERT model can provide.

## Model Description

The HIV-BERT-Protease-Resistance model is intended to predict the likelihood that an HIV protease sequence will be resistant to protease inhibitors. The protease gene is responsible for cleaving viral proteins into their active states, and as such is an ideal target for antiretroviral therapy. Annotation programs designed to predict and identify protease resistance using known mutations already exist, though with varied results. The HIV-BERT-Protease-Resistance model is designed to provide an alternative, NLP-based mechanism for predicting resistance mutations when provided with an HIV protease sequence.

## Intended Uses & Limitations

This tool can be used as a predictor of protease resistance mutations within an HIV genomic sequence. It should not be considered a clinical diagnostic tool.

## How to use

*Prediction example of protease sequences* (a hedged usage sketch is shown below).

## Training Data

This model was trained using the [damlab/HIV-PI dataset](https://huggingface.co/datasets/damlab/HIV_PI) using the 0th fold. The dataset consists of 1959 sequences (approximately 99 tokens each) extracted from the Stanford HIV Genotype-Phenotype Database.

## Training Procedure

### Preprocessing

As with the [rostlab/Prot-bert-bfd model](https://huggingface.co/Rostlab/prot_bert_bfd), the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.

### Training

The [damlab/HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can be resistant to multiple drugs) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.

## Evaluation Results

*Need to add*

## BibTeX Entry and Citation Info

[More Information Needed]
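The "How to use" section of the card above is still a placeholder; a minimal, hedged sketch of what a prediction call could look like is given here. The space-separated input follows the ProtBert-style preprocessing the card describes, the protease fragment is only illustrative, and the label names returned by the checkpoint are not documented here, so treat the output format as an assumption.

```python
from transformers import pipeline

# Hedged sketch, not the documented API of this checkpoint: the returned
# label names (one per protease inhibitor) are assumptions.
classifier = pipeline("text-classification", model="damlab/HIV_PR_resist")

# Protease sequences are passed as space-separated amino acids, matching the
# ProtBert-style preprocessing the card describes (fragment shown for brevity).
protease = "P Q I T L W Q R P L V T I K I G G Q L K E A L L D T G A D D T V L E"

# With recent transformers versions, passing top_k=None to the pipeline would
# return a score for every drug class instead of only the highest-scoring label.
print(classifier(protease))
```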
{"license": "mit"}
damlab/HIV_PR_resist
null
[ "transformers", "pytorch", "bert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us
# HIV_PR_resist model ## Table of Contents - Summary - Model Description - Intended Uses & Limitations - How to Use - Training Data - Training Procedure - Preprocessing - Training - Evaluation Results - BibTeX Entry and Citation Info ## Summary The HIV-BERT-Protease-Resistance model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict whether an HIV protease sequence will be resistant to certain protease inhibitors. HIV-BERT is a model refined from the ProtBert-BFD model to better fulfill HIV-centric tasks. This model was then trained using HIV protease sequences from the Stanford HIV Genotype-Phenotype Database, allowing even more precise prediction protease inhibitor resistance than the HIV-BERT model can provide. ## Model Description The HIV-BERT-Protease-Resistance model is intended to predict the likelihood that an HIV protease sequence will be resistant to protease inhibitors. The protease gene is responsible for cleaving viral proteins into their active states, and as such is an ideal target for antiretroviral therapy. Annotation programs designed to predict and identify protease resistance using known mutations already exist, however with varied results. The HIV-BERT-Protease-Resistance model is designed to provide an alternative, NLP-based mechanism for predicting resistance mutations when provided with an HIV protease sequence. ## Intended Uses & Limitations This tool can be used as a predictor of protease resistance mutations within an HIV genomic sequence. It should not be considered a clinical diagnostic tool. ## How to use *Prediction example of protease sequences* ## Training Data This model was trained using the damlab/HIV-PI dataset using the 0th fold. The dataset consists of 1959 sequences (approximately 99 tokens each) extracted from the Stanford HIV Genotype-Phenotype Database. ## Training Procedure ### Preprocessing As with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation. ### Training The damlab/HIV-BERT model was used as the initial weights for an AutoModelforClassificiation. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multiple classification task (a protein can be resistant to multiple drugs) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance. ## Evaluation Results *Need to add* ## BibTeX Entry and Citation Info
[ "# HIV_PR_resist model", "## Table of Contents\r\n- Summary\r\n- Model Description\r\n- Intended Uses & Limitations\r\n- How to Use\r\n- Training Data\r\n- Training Procedure\r\n - Preprocessing\r\n - Training\r\n- Evaluation Results\r\n- BibTeX Entry and Citation Info", "## Summary\r\n\r\nThe HIV-BERT-Protease-Resistance model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict whether an HIV protease sequence will be resistant to certain protease inhibitors. HIV-BERT is a model refined from the ProtBert-BFD model to better fulfill HIV-centric tasks. This model was then trained using HIV protease sequences from the Stanford HIV Genotype-Phenotype Database, allowing even more precise prediction protease inhibitor resistance than the HIV-BERT model can provide.", "## Model Description\r\n\r\nThe HIV-BERT-Protease-Resistance model is intended to predict the likelihood that an HIV protease sequence will be resistant to protease inhibitors. The protease gene is responsible for cleaving viral proteins into their active states, and as such is an ideal target for antiretroviral therapy. Annotation programs designed to predict and identify protease resistance using known mutations already exist, however with varied results. The HIV-BERT-Protease-Resistance model is designed to provide an alternative, NLP-based mechanism for predicting resistance mutations when provided with an HIV protease sequence.", "## Intended Uses & Limitations\r\n\r\nThis tool can be used as a predictor of protease resistance mutations within an HIV genomic sequence. It should not be considered a clinical diagnostic tool.", "## How to use\r\n\r\n*Prediction example of protease sequences*", "## Training Data\r\n\r\nThis model was trained using the damlab/HIV-PI dataset using the 0th fold. The dataset consists of 1959 sequences (approximately 99 tokens each) extracted from the Stanford HIV Genotype-Phenotype Database.", "## Training Procedure", "### Preprocessing\r\n\r\nAs with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.", "### Training\r\n\r\nThe damlab/HIV-BERT model was used as the initial weights for an AutoModelforClassificiation. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multiple classification task (a protein can be resistant to multiple drugs) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.", "## Evaluation Results\r\n\r\n*Need to add*", "## BibTeX Entry and Citation Info" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# HIV_PR_resist model", "## Table of Contents\r\n- Summary\r\n- Model Description\r\n- Intended Uses & Limitations\r\n- How to Use\r\n- Training Data\r\n- Training Procedure\r\n - Preprocessing\r\n - Training\r\n- Evaluation Results\r\n- BibTeX Entry and Citation Info", "## Summary\r\n\r\nThe HIV-BERT-Protease-Resistance model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict whether an HIV protease sequence will be resistant to certain protease inhibitors. HIV-BERT is a model refined from the ProtBert-BFD model to better fulfill HIV-centric tasks. This model was then trained using HIV protease sequences from the Stanford HIV Genotype-Phenotype Database, allowing even more precise prediction protease inhibitor resistance than the HIV-BERT model can provide.", "## Model Description\r\n\r\nThe HIV-BERT-Protease-Resistance model is intended to predict the likelihood that an HIV protease sequence will be resistant to protease inhibitors. The protease gene is responsible for cleaving viral proteins into their active states, and as such is an ideal target for antiretroviral therapy. Annotation programs designed to predict and identify protease resistance using known mutations already exist, however with varied results. The HIV-BERT-Protease-Resistance model is designed to provide an alternative, NLP-based mechanism for predicting resistance mutations when provided with an HIV protease sequence.", "## Intended Uses & Limitations\r\n\r\nThis tool can be used as a predictor of protease resistance mutations within an HIV genomic sequence. It should not be considered a clinical diagnostic tool.", "## How to use\r\n\r\n*Prediction example of protease sequences*", "## Training Data\r\n\r\nThis model was trained using the damlab/HIV-PI dataset using the 0th fold. The dataset consists of 1959 sequences (approximately 99 tokens each) extracted from the Stanford HIV Genotype-Phenotype Database.", "## Training Procedure", "### Preprocessing\r\n\r\nAs with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.", "### Training\r\n\r\nThe damlab/HIV-BERT model was used as the initial weights for an AutoModelforClassificiation. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multiple classification task (a protein can be resistant to multiple drugs) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.", "## Evaluation Results\r\n\r\n*Need to add*", "## BibTeX Entry and Citation Info" ]
text-classification
transformers
# HIV_V3_coreceptor model

## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
  - [Preprocessing](#preprocessing)
  - [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)

## Summary

The HIV-BERT-Coreceptor model was trained as a refinement of the [HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) and serves to better predict HIV V3 coreceptor tropism. HIV-BERT is a model refined from the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the [Los Alamos HIV Sequence Database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html), allowing even more precise prediction of V3 coreceptor tropism than the HIV-BERT model can provide.

## Model Description

The HIV-BERT-Coreceptor model is intended to predict the co-receptor tropism of HIV from a segment of the envelope protein. These envelope proteins encapsulate the virus and interact with the host cell through the human CD4 receptor. HIV then requires the interaction of one of two co-receptors: CCR5 or CXCR4. The availability of these co-receptors on different cell types allows the virus to invade different areas of the body and evade antiretroviral therapy. The 3rd variable loop of the envelope protein, the V3 loop, is responsible for this interaction. Given a V3 loop sequence, the HIV-BERT-Coreceptor model will predict the likelihood of binding to each of these co-receptors.

## Intended Uses & Limitations

This tool can be used as a predictor of HIV tropism from the Env-V3 loop. It can recognize R5, X4, and dual-tropic viruses natively. It should not be considered a clinical diagnostic tool.

This tool was trained using the [Los Alamos HIV sequence dataset](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of subtypes C, A, and D. Currently, no effort has been made to balance performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.

## How to use

*Need to add* (a hedged usage sketch is shown below).

## Training Data

This model was trained using the [damlab/HIV_V3_coreceptor dataset](https://huggingface.co/datasets/damlab/HIV_V3_coreceptor) using the 0th fold. The dataset consists of 2935 V3 sequences (approximately 35 tokens each) extracted from the [Los Alamos HIV Sequence database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html).

## Training Procedure

### Preprocessing

As with the [rostlab/Prot-bert-bfd model](https://huggingface.co/Rostlab/prot_bert_bfd), the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.

### Training

The [damlab/HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can bind to CCR5, CXCR4, neither, or both) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.

## Evaluation Results

*Need to add*

## BibTeX Entry and Citation Info

[More Information Needed]
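Since the "How to use" section of the card above is still empty, here is a minimal hedged sketch of how a prediction could be run. The V3 sequence comes from this record's widget examples; the label names returned by the checkpoint (for example CCR5 and CXCR4) are assumptions, not documented output.

```python
from transformers import pipeline

# Hedged sketch only; the label names emitted by this checkpoint are assumptions.
classifier = pipeline("text-classification", model="damlab/HIV_V3_Coreceptor")

# Space-separated V3 loop sequence, taken from the record's widget examples.
v3 = "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
print(classifier(v3))
```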
{"license": "mit", "widget": [{"text": "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"}, {"text": "C T R P N N N T R K S I H I G P G R A F Y T T G Q I I G D I R Q A Y C"}, {"text": "C T R P N N N T R R S I R I G P G Q A F Y A T G D I I G D I R Q A H C"}, {"text": "C G R P N N H R I K G L R I G P G R A F F A M G A I G G G E I R Q A H C"}]}
damlab/HIV_V3_Coreceptor
null
[ "transformers", "pytorch", "bert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us
# HIV_V3_coreceptor model ## Table of Contents - Summary - Model Description - Intended Uses & Limitations - How to Use - Training Data - Training Procedure - Preprocessing - Training - Evaluation Results - BibTeX Entry and Citation Info ## Summary The HIV-BERT-Coreceptor model was trained as a refinement of the HIV-BERT model and serves to better predict HIV V3 coreceptor tropism. HIV-BERT is a model refined from the ProtBert-BFD model to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the Los Alamos HIV Sequence Database, allowing even more precise prediction of V3 coreceptor tropism than the HIV-BERT model can provide. ## Model Description The HIV-BERT-Coreceptor model is intended to predict the Co-receptor tropism of HIV from a segment of the envelope protein. These envelope proteins encapsulate the virus and interact with the host cell through the human CD4 receptor. HIV then requires the interaction of one, of two, co-receptors: CCR5 or CXCR4. The availability of these co-receptors on different cell types allows the virus to invade different areas of the body and evade antiretroviral therapy. The 3rd variable loop of the envelope protein, the V3 loop, is responsible for this interaction. Given a V3 loop sequence, the HIV-BERT-Coreceptor model will predict the likelihood of binding to each of these co-receptors. ## Intended Uses & Limitations This tool can be used as a predictor of HIV tropism from the Env-V3 loop. It can recognize both R5, X4, and dual tropic viruses natively. It should not be considered a clinical diagnostic tool. This tool was trained using the Los Alamos HIV sequence dataset. Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of Subtype C, A, and D. Currently, there was no effort made to balance the performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences. ## How to use *Need to add* ## Training Data This model was trained using the damlab/HIV_V3_coreceptor dataset using the 0th fold. The dataset consists of 2935 V3 sequences (approximately 35 tokens each) extracted from the Los Alamos HIV Sequence database. ## Training Procedure ### Preprocessing As with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation. ### Training The damlab/HIV-BERT model was used as the initial weights for an AutoModelforClassificiation. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multiple classification task (a protein can bind to CCR5, CXCR4, neither, or both) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance. ## Evaluation Results *Need to add* ## BibTeX Entry and Citation Info
[ "# HIV_V3_coreceptor model", "## Table of Contents\r\n- Summary\r\n- Model Description\r\n- Intended Uses & Limitations\r\n- How to Use\r\n- Training Data\r\n- Training Procedure\r\n - Preprocessing\r\n - Training\r\n- Evaluation Results\r\n- BibTeX Entry and Citation Info", "## Summary\r\n\r\nThe HIV-BERT-Coreceptor model was trained as a refinement of the HIV-BERT model and serves to better predict HIV V3 coreceptor tropism. HIV-BERT is a model refined from the ProtBert-BFD model to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the Los Alamos HIV Sequence Database, allowing even more precise prediction of V3 coreceptor tropism than the HIV-BERT model can provide.", "## Model Description\r\n\r\nThe HIV-BERT-Coreceptor model is intended to predict the Co-receptor tropism of HIV from a segment of the envelope protein. These envelope proteins encapsulate the virus and interact with the host cell through the human CD4 receptor. HIV then requires the interaction of one, of two, co-receptors: CCR5 or CXCR4. The availability of these co-receptors on different cell types allows the virus to invade different areas of the body and evade antiretroviral therapy. The 3rd variable loop of the envelope protein, the V3 loop, is responsible for this interaction. Given a V3 loop sequence, the HIV-BERT-Coreceptor model will predict the likelihood of binding to each of these co-receptors.", "## Intended Uses & Limitations\r\n\r\nThis tool can be used as a predictor of HIV tropism from the Env-V3 loop. It can recognize both R5, X4, and dual tropic viruses natively. It should not be considered a clinical diagnostic tool. \r\n \r\nThis tool was trained using the Los Alamos HIV sequence dataset. Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of Subtype C, A, and D. Currently, there was no effort made to balance the performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.", "## How to use\r\n\r\n*Need to add*", "## Training Data\r\n\r\nThis model was trained using the damlab/HIV_V3_coreceptor dataset using the 0th fold. The dataset consists of 2935 V3 sequences (approximately 35 tokens each) extracted from the Los Alamos HIV Sequence database.", "## Training Procedure", "### Preprocessing\r\n\r\nAs with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.", "### Training\r\n\r\nThe damlab/HIV-BERT model was used as the initial weights for an AutoModelforClassificiation. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multiple classification task (a protein can bind to CCR5, CXCR4, neither, or both) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.", "## Evaluation Results\r\n\r\n*Need to add*", "## BibTeX Entry and Citation Info" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# HIV_V3_coreceptor model", "## Table of Contents\r\n- Summary\r\n- Model Description\r\n- Intended Uses & Limitations\r\n- How to Use\r\n- Training Data\r\n- Training Procedure\r\n - Preprocessing\r\n - Training\r\n- Evaluation Results\r\n- BibTeX Entry and Citation Info", "## Summary\r\n\r\nThe HIV-BERT-Coreceptor model was trained as a refinement of the HIV-BERT model and serves to better predict HIV V3 coreceptor tropism. HIV-BERT is a model refined from the ProtBert-BFD model to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the Los Alamos HIV Sequence Database, allowing even more precise prediction of V3 coreceptor tropism than the HIV-BERT model can provide.", "## Model Description\r\n\r\nThe HIV-BERT-Coreceptor model is intended to predict the Co-receptor tropism of HIV from a segment of the envelope protein. These envelope proteins encapsulate the virus and interact with the host cell through the human CD4 receptor. HIV then requires the interaction of one, of two, co-receptors: CCR5 or CXCR4. The availability of these co-receptors on different cell types allows the virus to invade different areas of the body and evade antiretroviral therapy. The 3rd variable loop of the envelope protein, the V3 loop, is responsible for this interaction. Given a V3 loop sequence, the HIV-BERT-Coreceptor model will predict the likelihood of binding to each of these co-receptors.", "## Intended Uses & Limitations\r\n\r\nThis tool can be used as a predictor of HIV tropism from the Env-V3 loop. It can recognize both R5, X4, and dual tropic viruses natively. It should not be considered a clinical diagnostic tool. \r\n \r\nThis tool was trained using the Los Alamos HIV sequence dataset. Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of Subtype C, A, and D. Currently, there was no effort made to balance the performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.", "## How to use\r\n\r\n*Need to add*", "## Training Data\r\n\r\nThis model was trained using the damlab/HIV_V3_coreceptor dataset using the 0th fold. The dataset consists of 2935 V3 sequences (approximately 35 tokens each) extracted from the Los Alamos HIV Sequence database.", "## Training Procedure", "### Preprocessing\r\n\r\nAs with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.", "### Training\r\n\r\nThe damlab/HIV-BERT model was used as the initial weights for an AutoModelforClassificiation. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multiple classification task (a protein can bind to CCR5, CXCR4, neither, or both) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.", "## Evaluation Results\r\n\r\n*Need to add*", "## BibTeX Entry and Citation Info" ]
text-classification
transformers
# Model Card for HIV_V3_bodysite

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
  - [Preprocessing](#preprocessing)
  - [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)

## Summary

The HIV-BERT-Bodysite-Identification model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict the location from which an HIV V3 loop sample was derived. HIV-BERT is a model refined from the ProtBert-BFD model (https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the Los Alamos HIV Sequence Database (https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html), allowing even more precise prediction of body site location than the HIV-BERT model can provide.

## Model Description

The HIV-BERT-Bodysite-Identification model is intended to predict the location from which an HIV sequence was most likely derived. Because HIV infects immune cells, it uses these as a means of rapidly spreading throughout the body. Thus, body site identification can help determine where exactly these HIV particles ultimately end up. This would be helpful when attempting to study HIV treatment strategies. When provided with an HIV genomic sequence, the HIV-BERT-Bodysite-Identification model can predict which tissue it was derived from.

## Intended Uses & Limitations

This tool can be used as a predictor of which body site an HIV sample was derived from based on its genomic sequence. It should not be considered a clinical diagnostic tool.

This tool was trained using the Los Alamos HIV sequence dataset (https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of subtypes C, A, and D. Currently, no effort has been made to balance performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.

## How to use

This model is able to predict the likely body site from a V3 sequence. This may be used for surveillance of cells that are emerging from latent reservoirs. Remember, a sequence can come from multiple sites; they are not mutually exclusive.

```python
from transformers import pipeline

predictor = pipeline("text-classification", model="damlab/HIV_V3_bodysite")

predictor("C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C")

[
  [
    {"label": "periphery-tcell", "score": 0.29097115993499756},
    {"label": "periphery-monocyte", "score": 0.014322502538561821},
    {"label": "CNS", "score": 0.06870711594820023},
    {"label": "breast-milk", "score": 0.002785981632769108},
    {"label": "female-genitals", "score": 0.024997007101774216},
    {"label": "male-genitals", "score": 0.01040483545511961},
    {"label": "gastric", "score": 0.06872137635946274},
    {"label": "lung", "score": 0.04432062804698944},
    {"label": "organ", "score": 0.47476938366889954}
  ]
]
```

## Training Data

This model was trained using the damlab/HIV_V3_bodysite dataset using the 0th fold. The dataset consists of 5510 sequences (approximately 35 tokens each) extracted from the Los Alamos HIV Sequence database.

## Training Procedure

### Preprocessing

As with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.

### Training

The damlab/HIV-BERT model was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can be found in multiple sites) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.

## Evaluation Results

*Need to add*

## BibTeX Entry and Citation Info

[More Information Needed]
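The training sections of the three damlab classifier cards in this section all mention weighting the binary cross-entropy by the inverse of the class ratio. A minimal PyTorch sketch of one way that weighting could be computed is shown here; the toy label matrix and the exact weighting formula are assumptions, not the authors' code.

```python
import torch

# Toy multi-hot label matrix: (num_examples, num_classes). Placeholder data only.
labels = torch.tensor([[1., 0., 1.],
                       [0., 0., 1.],
                       [1., 1., 1.],
                       [0., 0., 1.]])

# Weight each class by the ratio of negatives to positives so rare classes
# contribute more to the loss (one plausible reading of "inverse of the class ratio").
pos_ratio = labels.mean(dim=0)
pos_weight = (1.0 - pos_ratio) / pos_ratio

loss_fn = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
logits = torch.randn_like(labels)   # stand-in for the classifier's raw outputs
print(loss_fn(logits, labels))
```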
{"datasets": ["damlab/HIV_V3_bodysite"], "metrics": ["accuracy"], "licence": "mit", "widget": [{"text": "T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C", "example_title": "V3 Macrophage"}, {"text": "C T R P N N N T R K S I H I G P G R A F Y T T G Q I I G D I R Q A Y C", "example_title": "V3 T-cell"}]}
damlab/HIV_V3_bodysite
null
[ "transformers", "pytorch", "bert", "text-classification", "dataset:damlab/HIV_V3_bodysite", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #dataset-damlab/HIV_V3_bodysite #autotrain_compatible #endpoints_compatible #region-us
# Model Card for [HIV_V3_bodysite] ## Table of Contents - Table of Contents - Summary - Model Description - Intended Uses & Limitations - How to Use - Training Data - Training Procedure - Preprocessing - Training - Evaluation Results - BibTeX Entry and Citation Info ## Summary The HIV-BERT-Bodysite-Identification model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict the location that an HIV V3 loop sample was derived from. HIV-BERT is a model refined from the ProtBert-BFD model (URL to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the Los Alamos HIV Sequence Database (URL allowing even more precise prediction of body site location than the HIV-BERT model can provide. ## Model Description The HIV-BERT-Bodysite-Identification model is intended to predict the location as to where an HIV sequence was most likely derived from. Because HIV infects immune cells, it uses these as a means of rapidly spreading throughout the body. Thus, body site identification can help determine where exactly these HIV particles ultimately end up. This would be helpful when attempting to study HIV treatment strategies. When provided with an HIV genomic sequence, the HIV-BERT-Bodysite-Identification model can predict which tissue it was derived from. ## Intended Uses & Limitations This tool can be used as a predictor of which body site an HIV sample was derived from based on its genomic sequence. It should not be considered a clinical diagnostic tool. This tool was trained using the Los Alamos HIV sequence dataset (URL Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of Subtype C, A, and D. Currently, there was no effort made to balance the performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences. ## How to use This model is able to predict the likely bodysite from a V3 sequence. This may be use for surveillance of cells that are emerging from latent reservoirs. Remember, a sequence can come from multiple sites, they are not mutually exclusive. ## Training Data This model was trained using the damlab/HIV_V3_bodysite dataset using the 0th fold. The dataset consists of 5510 sequences (approximately 35 tokens each) extracted from the Los Alamos HIV Sequence database. ## Training Procedure ### Preprocessing As with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation. ### Training The damlab/HIV-BERT model was used as the initial weights for an AutoModelforClassificiation. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multiple classification task (a protein can be found in multiple sites) the loss was calculated as the Binary Cross Entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance. ## Evaluation Results *Need to add* ## BibTeX Entry and Citation Info
[ "# Model Card for [HIV_V3_bodysite]", "## Table of Contents\r\n- Table of Contents\r\n- Summary\r\n- Model Description\r\n- Intended Uses & Limitations\r\n- How to Use\r\n- Training Data\r\n- Training Procedure\r\n - Preprocessing\r\n - Training\r\n- Evaluation Results\r\n- BibTeX Entry and Citation Info", "## Summary\r\n\r\nThe HIV-BERT-Bodysite-Identification model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict the location that an HIV V3 loop sample was derived from. HIV-BERT is a model refined from the ProtBert-BFD model (URL to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the Los Alamos HIV Sequence Database (URL allowing even more precise prediction of body site location than the HIV-BERT model can provide.", "## Model Description\r\n\r\nThe HIV-BERT-Bodysite-Identification model is intended to predict the location as to where an HIV sequence was most likely derived from. Because HIV infects immune cells, it uses these as a means of rapidly spreading throughout the body. Thus, body site identification can help determine where exactly these HIV particles ultimately end up. This would be helpful when attempting to study HIV treatment strategies. When provided with an HIV genomic sequence, the HIV-BERT-Bodysite-Identification model can predict which tissue it was derived from.", "## Intended Uses & Limitations\r\n\r\nThis tool can be used as a predictor of which body site an HIV sample was derived from based on its genomic sequence. It should not be considered a clinical diagnostic tool. \r\n \r\nThis tool was trained using the Los Alamos HIV sequence dataset (URL Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of Subtype C, A, and D. Currently, there was no effort made to balance the performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.", "## How to use\r\n\r\nThis model is able to predict the likely bodysite from a V3 sequence.\r\nThis may be use for surveillance of cells that are emerging from latent reservoirs.\r\nRemember, a sequence can come from multiple sites, they are not mutually exclusive.", "## Training Data\r\n\r\nThis model was trained using the damlab/HIV_V3_bodysite dataset using the 0th fold. The dataset consists of 5510 sequences (approximately 35 tokens each) extracted from the Los Alamos HIV Sequence database.", "## Training Procedure", "### Preprocessing\r\n\r\nAs with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.", "### Training\r\n\r\nThe damlab/HIV-BERT model was used as the initial weights for an AutoModelforClassificiation. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multiple classification task (a protein can be found in multiple sites) the loss was calculated as the Binary Cross Entropy for each category. 
The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.", "## Evaluation Results\r\n\r\n*Need to add*", "## BibTeX Entry and Citation Info" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #dataset-damlab/HIV_V3_bodysite #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for [HIV_V3_bodysite]", "## Table of Contents\r\n- Table of Contents\r\n- Summary\r\n- Model Description\r\n- Intended Uses & Limitations\r\n- How to Use\r\n- Training Data\r\n- Training Procedure\r\n - Preprocessing\r\n - Training\r\n- Evaluation Results\r\n- BibTeX Entry and Citation Info", "## Summary\r\n\r\nThe HIV-BERT-Bodysite-Identification model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict the location that an HIV V3 loop sample was derived from. HIV-BERT is a model refined from the ProtBert-BFD model (URL to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the Los Alamos HIV Sequence Database (URL allowing even more precise prediction of body site location than the HIV-BERT model can provide.", "## Model Description\r\n\r\nThe HIV-BERT-Bodysite-Identification model is intended to predict the location as to where an HIV sequence was most likely derived from. Because HIV infects immune cells, it uses these as a means of rapidly spreading throughout the body. Thus, body site identification can help determine where exactly these HIV particles ultimately end up. This would be helpful when attempting to study HIV treatment strategies. When provided with an HIV genomic sequence, the HIV-BERT-Bodysite-Identification model can predict which tissue it was derived from.", "## Intended Uses & Limitations\r\n\r\nThis tool can be used as a predictor of which body site an HIV sample was derived from based on its genomic sequence. It should not be considered a clinical diagnostic tool. \r\n \r\nThis tool was trained using the Los Alamos HIV sequence dataset (URL Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of Subtype C, A, and D. Currently, there was no effort made to balance the performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.", "## How to use\r\n\r\nThis model is able to predict the likely bodysite from a V3 sequence.\r\nThis may be use for surveillance of cells that are emerging from latent reservoirs.\r\nRemember, a sequence can come from multiple sites, they are not mutually exclusive.", "## Training Data\r\n\r\nThis model was trained using the damlab/HIV_V3_bodysite dataset using the 0th fold. The dataset consists of 5510 sequences (approximately 35 tokens each) extracted from the Los Alamos HIV Sequence database.", "## Training Procedure", "### Preprocessing\r\n\r\nAs with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.", "### Training\r\n\r\nThe damlab/HIV-BERT model was used as the initial weights for an AutoModelforClassificiation. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multiple classification task (a protein can be found in multiple sites) the loss was calculated as the Binary Cross Entropy for each category. 
The BCE was weighted by the inverse of the class ratio to balance the weight across the class imbalance.", "## Evaluation Results\r\n\r\n*Need to add*", "## BibTeX Entry and Citation Info" ]
text-generation
transformers
#dialogue
{"tags": ["text-generation"]}
danchang11/GPT2-TraditionalChat
null
[ "transformers", "pytorch", "gpt2", "text-generation", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #endpoints_compatible #text-generation-inference #region-us
#dialogue
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #endpoints_compatible #text-generation-inference #region-us \n" ]
null
transformers
# Vision-and-Language Transformer (ViLT), fine-tuned on COCO

Vision-and-Language Transformer (ViLT) model fine-tuned on [COCO](https://cocodataset.org/#home). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).

Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.

## Intended uses & limitations

You can use the model for image and text retrieval.

### How to use

Here is how to use the model in PyTorch:

```python
from transformers import ViltProcessor, ViltForImageAndTextRetrieval
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-coco")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-coco")

# score each candidate text against the image
scores = dict()
for text in texts:
    # prepare inputs and run the forward pass
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, :].item()
```

## Training data

(to do)

## Training procedure

### Preprocessing

(to do)

### Pretraining

(to do)

## Evaluation results

(to do)

### BibTeX entry and citation info

```bibtex
@misc{kim2021vilt,
      title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
      author={Wonjae Kim and Bokyung Son and Ildoo Kim},
      year={2021},
      eprint={2102.03334},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}
```
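As a small follow-up to the retrieval snippet above, the resulting `scores` dictionary can be ranked to pick the caption that best matches the image. The values below are placeholders standing in for the logits the loop would produce.

```python
# Placeholder scores of the same shape the card's loop would produce.
scores = {"An image of two cats chilling on a couch": 6.2,
          "A football player scoring a goal": -3.1}

best_caption = max(scores, key=scores.get)
print("Best match:", best_caption)

# Full ranking, best to worst.
for caption, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:8.3f}  {caption}")
```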
{"license": "apache-2.0"}
dandelin/vilt-b32-finetuned-coco
null
[ "transformers", "pytorch", "vilt", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2102.03334" ]
[]
TAGS #transformers #pytorch #vilt #arxiv-2102.03334 #license-apache-2.0 #endpoints_compatible #region-us
# Vision-and-Language Transformer (ViLT), fine-tuned on COCO Vision-and-Language Transformer (ViLT) model fine-tuned on COCO. It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the model for image and text retrieval. ### How to use Here is how to use the model in PyTorch: ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info
[ "# Vision-and-Language Transformer (ViLT), fine-tuned on COCO\n\nVision-and-Language Transformer (ViLT) model fine-tuned on COCO. It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. \n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Intended uses & limitations\n\nYou can use the model for image and text retrieval.", "### How to use\n\nHere is how to use the model in PyTorch:", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #vilt #arxiv-2102.03334 #license-apache-2.0 #endpoints_compatible #region-us \n", "# Vision-and-Language Transformer (ViLT), fine-tuned on COCO\n\nVision-and-Language Transformer (ViLT) model fine-tuned on COCO. It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. \n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Intended uses & limitations\n\nYou can use the model for image and text retrieval.", "### How to use\n\nHere is how to use the model in PyTorch:", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
null
transformers
# Vision-and-Language Transformer (ViLT), fine-tuned on Flickr30k

Vision-and-Language Transformer (ViLT) model fine-tuned on [Flickr30k](https://arxiv.org/abs/1505.04870). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).

Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.

## Intended uses & limitations

You can use the model for image and text retrieval.

### How to use

Here is how to use the model in PyTorch:

```python
from transformers import ViltProcessor, ViltForImageAndTextRetrieval
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k")

# score each candidate text against the image
scores = dict()
for text in texts:
    # prepare inputs and run the forward pass
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, :].item()
```

## Training data

(to do)

## Training procedure

### Preprocessing

(to do)

### Pretraining

(to do)

## Evaluation results

(to do)

### BibTeX entry and citation info

```bibtex
@misc{kim2021vilt,
      title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
      author={Wonjae Kim and Bokyung Son and Ildoo Kim},
      year={2021},
      eprint={2102.03334},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}
```
{"license": "apache-2.0"}
dandelin/vilt-b32-finetuned-flickr30k
null
[ "transformers", "pytorch", "vilt", "arxiv:1505.04870", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1505.04870", "2102.03334" ]
[]
TAGS #transformers #pytorch #vilt #arxiv-1505.04870 #arxiv-2102.03334 #license-apache-2.0 #endpoints_compatible #region-us
# Vision-and-Language Transformer (ViLT), fine-tuned on Flickr30k Vision-and-Language Transformer (ViLT) model fine-tuned on Flickr30k. It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the model for image and text retrieval. ### How to use Here is how to use the model in PyTorch: ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info
[ "# Vision-and-Language Transformer (ViLT), fine-tuned on Flickr30k\n\nVision-and-Language Transformer (ViLT) model fine-tuned on Flickr30k. It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. \n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Intended uses & limitations\n\nYou can use the model for image and text retrieval.", "### How to use\n\nHere is how to use the model in PyTorch:", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #vilt #arxiv-1505.04870 #arxiv-2102.03334 #license-apache-2.0 #endpoints_compatible #region-us \n", "# Vision-and-Language Transformer (ViLT), fine-tuned on Flickr30k\n\nVision-and-Language Transformer (ViLT) model fine-tuned on Flickr30k. It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. \n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Intended uses & limitations\n\nYou can use the model for image and text retrieval.", "### How to use\n\nHere is how to use the model in PyTorch:", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
null
transformers
# Vision-and-Language Transformer (ViLT), fine-tuned on NLVR2

Vision-and-Language Transformer (ViLT) model fine-tuned on [NLVR2](https://lil.nlp.cornell.edu/nlvr/). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).

Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.

## Intended uses & limitations

You can use the model to determine whether a sentence is true or false given 2 images.

### How to use

Here is how to use the model in PyTorch:

```python
from transformers import ViltProcessor, ViltForImagesAndTextClassification
import requests
from PIL import Image

image1 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_0.jpg", stream=True).raw)
image2 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_1.jpg", stream=True).raw)
text = "The left image contains twice the number of dogs as the right image."

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")
model = ViltForImagesAndTextClassification.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")

# prepare inputs
encoding = processor([image1, image2], text, return_tensors="pt")

# forward pass
outputs = model(input_ids=encoding.input_ids, pixel_values=encoding.pixel_values.unsqueeze(0))
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```

## Training data

(to do)

## Training procedure

### Preprocessing

(to do)

### Pretraining

(to do)

## Evaluation results

(to do)

### BibTeX entry and citation info

```bibtex
@misc{kim2021vilt,
      title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
      author={Wonjae Kim and Bokyung Son and Ildoo Kim},
      year={2021},
      eprint={2102.03334},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}
```
{"license": "apache-2.0"}
dandelin/vilt-b32-finetuned-nlvr2
null
[ "transformers", "pytorch", "vilt", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2102.03334" ]
[]
TAGS #transformers #pytorch #vilt #arxiv-2102.03334 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Vision-and-Language Transformer (ViLT), fine-tuned on NLVR2 Vision-and-Language Transformer (ViLT) model fine-tuned on NLVR2. It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the model to determine whether a sentence is true or false given 2 images. ### How to use Here is how to use the model in PyTorch: ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info
[ "# Vision-and-Language Transformer (ViLT), fine-tuned on NLVR2\n\nVision-and-Language Transformer (ViLT) model fine-tuned on NLVR2. It was introduced in the paper ViLT: Vision-and-Language Transformer\nWithout Convolution or Region Supervision by Kim et al. and first released in this repository. \n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Intended uses & limitations\n\nYou can use the model to determine whether a sentence is true or false given 2 images.", "### How to use\n\nHere is how to use the model in PyTorch:", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #vilt #arxiv-2102.03334 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Vision-and-Language Transformer (ViLT), fine-tuned on NLVR2\n\nVision-and-Language Transformer (ViLT) model fine-tuned on NLVR2. It was introduced in the paper ViLT: Vision-and-Language Transformer\nWithout Convolution or Region Supervision by Kim et al. and first released in this repository. \n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Intended uses & limitations\n\nYou can use the model to determine whether a sentence is true or false given 2 images.", "### How to use\n\nHere is how to use the model in PyTorch:", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
visual-question-answering
transformers
# Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2 Vision-and-Language Transformer (ViLT) model fine-tuned on [VQAv2](https://visualqa.org/). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the raw model for visual question answering. ### How to use Here is how to use this model in PyTorch: ```python from transformers import ViltProcessor, ViltForQuestionAnswering import requests from PIL import Image # prepare image + question url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) text = "How many cats are there?" processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa") model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa") # prepare inputs encoding = processor(image, text, return_tensors="pt") # forward pass outputs = model(**encoding) logits = outputs.logits idx = logits.argmax(-1).item() print("Predicted answer:", model.config.id2label[idx]) ``` ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info ```bibtex @misc{kim2021vilt, title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision}, author={Wonjae Kim and Bokyung Son and Ildoo Kim}, year={2021}, eprint={2102.03334}, archivePrefix={arXiv}, primaryClass={stat.ML} } ```
{"license": "apache-2.0", "tags": ["visual-question-answering"], "widget": [{"text": "What's the animal doing?", "src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"}, {"text": "What is on top of the building?", "src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg"}]}
dandelin/vilt-b32-finetuned-vqa
null
[ "transformers", "pytorch", "vilt", "visual-question-answering", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2102.03334" ]
[]
TAGS #transformers #pytorch #vilt #visual-question-answering #arxiv-2102.03334 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2 Vision-and-Language Transformer (ViLT) model fine-tuned on VQAv2. It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the raw model for visual question answering. ### How to use Here is how to use this model in PyTorch: ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info
[ "# Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2\n\nVision-and-Language Transformer (ViLT) model fine-tuned on VQAv2. It was introduced in the paper ViLT: Vision-and-Language Transformer\nWithout Convolution or Region Supervision by Kim et al. and first released in this repository. \n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Intended uses & limitations\n\nYou can use the raw model for visual question answering.", "### How to use\n\nHere is how to use this model in PyTorch:", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #vilt #visual-question-answering #arxiv-2102.03334 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2\n\nVision-and-Language Transformer (ViLT) model fine-tuned on VQAv2. It was introduced in the paper ViLT: Vision-and-Language Transformer\nWithout Convolution or Region Supervision by Kim et al. and first released in this repository. \n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Intended uses & limitations\n\nYou can use the raw model for visual question answering.", "### How to use\n\nHere is how to use this model in PyTorch:", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
null
transformers
# Vision-and-Language Transformer (ViLT), pre-trained only Vision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description (to do) ## Intended uses & limitations You can use the raw model for visual question answering. ### How to use (to do) ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info ```bibtex @misc{kim2021vilt, title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision}, author={Wonjae Kim and Bokyung Son and Ildoo Kim}, year={2021}, eprint={2102.03334}, archivePrefix={arXiv}, primaryClass={stat.ML} } ```
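The "How to use" section above is still marked "(to do)". As a stop-gap, here is a minimal sketch of loading this checkpoint with the generic `ViltProcessor`/`ViltModel` classes to obtain a joint image-text representation. This is an assumption about the checkpoint layout (no task-specific head is exercised), not the authors' documented usage:

```python
from transformers import ViltProcessor, ViltModel
from PIL import Image
import requests
import torch

# an example image and caption
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "two cats lying on a couch"

# assumption: the repository ships a processor config compatible with ViltProcessor
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm-itm")
model = ViltModel.from_pretrained("dandelin/vilt-b32-mlm-itm")

inputs = processor(image, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# pooled multimodal representation of the (image, text) pair
print(outputs.pooler_output.shape)
```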
{"license": "apache-2.0"}
dandelin/vilt-b32-mlm-itm
null
[ "transformers", "pytorch", "vilt", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2102.03334" ]
[]
TAGS #transformers #pytorch #vilt #arxiv-2102.03334 #license-apache-2.0 #endpoints_compatible #region-us
# Vision-and-Language Transformer (ViLT), pre-trained only Vision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description (to do) ## Intended uses & limitations You can use the raw model for visual question answering. ### How to use (to do) ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info
[ "# Vision-and-Language Transformer (ViLT), pre-trained only\n\nVision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. \n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\n(to do)", "## Intended uses & limitations\n\nYou can use the raw model for visual question answering.", "### How to use\n\n(to do)", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #vilt #arxiv-2102.03334 #license-apache-2.0 #endpoints_compatible #region-us \n", "# Vision-and-Language Transformer (ViLT), pre-trained only\n\nVision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. \n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\n(to do)", "## Intended uses & limitations\n\nYou can use the raw model for visual question answering.", "### How to use\n\n(to do)", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
fill-mask
transformers
# Vision-and-Language Transformer (ViLT), pre-trained only

Vision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT). Note: this model only includes the language modeling head.

Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.

## Intended uses & limitations

You can use the raw model for masked language modeling given an image and a piece of text with [MASK] tokens.

### How to use

Here is how to use this model in PyTorch:

```python
from transformers import ViltProcessor, ViltForMaskedLM
import requests
from PIL import Image
import re
import torch

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "a bunch of [MASK] laying on a [MASK]."

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
model = ViltForMaskedLM.from_pretrained("dandelin/vilt-b32-mlm")

# prepare inputs (the pixel values are reused in every decoding step)
encoding = processor(image, text, return_tensors="pt")
pixel_values = encoding.pixel_values

tl = len(re.findall(r"\[MASK\]", text))
inferred_token = [text]

# gradually fill in the MASK tokens, one by one
with torch.no_grad():
    for i in range(tl):
        encoded = processor.tokenizer(inferred_token)
        input_ids = torch.tensor(encoded.input_ids)
        encoded = encoded["input_ids"][0][1:-1]
        outputs = model(input_ids=input_ids, pixel_values=pixel_values)
        mlm_logits = outputs.logits[0]  # shape (seq_len, vocab_size)
        # only take into account text features (minus CLS and SEP token)
        mlm_logits = mlm_logits[1 : input_ids.shape[1] - 1, :]
        mlm_values, mlm_ids = mlm_logits.softmax(dim=-1).max(dim=-1)
        # only update positions that are still [MASK] (BERT [MASK] token id is 103)
        mlm_values[torch.tensor(encoded) != 103] = 0
        select = mlm_values.argmax().item()
        encoded[select] = mlm_ids[select].item()
        inferred_token = [processor.decode(encoded)]

encoded = processor.tokenizer(inferred_token)
print(processor.decode(encoded.input_ids[0], skip_special_tokens=True))
```

## Training data

(to do)

## Training procedure

### Preprocessing

(to do)

### Pretraining

(to do)

## Evaluation results

(to do)

### BibTeX entry and citation info

```bibtex
@misc{kim2021vilt,
      title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
      author={Wonjae Kim and Bokyung Son and Ildoo Kim},
      year={2021},
      eprint={2102.03334},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}
```
{"license": "apache-2.0"}
dandelin/vilt-b32-mlm
null
[ "transformers", "pytorch", "vilt", "fill-mask", "arxiv:2102.03334", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2102.03334" ]
[]
TAGS #transformers #pytorch #vilt #fill-mask #arxiv-2102.03334 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Vision-and-Language Transformer (ViLT), pre-trained only Vision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. Note: this model only includes the language modeling head. Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Intended uses & limitations You can use the raw model for masked language modeling given an image and a piece of text with [MASK] tokens. ### How to use Here is how to use this model in PyTorch: ## Training data (to do) ## Training procedure ### Preprocessing (to do) ### Pretraining (to do) ## Evaluation results (to do) ### BibTeX entry and citation info
[ "# Vision-and-Language Transformer (ViLT), pre-trained only\n\nVision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. Note: this model only includes the language modeling head.\n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Intended uses & limitations\n\nYou can use the raw model for masked language modeling given an image and a piece of text with [MASK] tokens.", "### How to use\n\nHere is how to use this model in PyTorch:", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #vilt #fill-mask #arxiv-2102.03334 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Vision-and-Language Transformer (ViLT), pre-trained only\n\nVision-and-Language Transformer (ViLT) model pre-trained on GCC+SBU+COCO+VG (200k steps). It was introduced in the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Kim et al. and first released in this repository. Note: this model only includes the language modeling head.\n\nDisclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Intended uses & limitations\n\nYou can use the raw model for masked language modeling given an image and a piece of text with [MASK] tokens.", "### How to use\n\nHere is how to use this model in PyTorch:", "## Training data\n\n(to do)", "## Training procedure", "### Preprocessing\n\n(to do)", "### Pretraining\n\n(to do)", "## Evaluation results\n\n(to do)", "### BibTeX entry and citation info" ]
null
transformers
# GPT-2 Fine-tuning on Vietnamese Wikipedia

## Model description

This is a Vietnamese GPT-2 model fine-tuned on the [Latest pages articles of Vietnamese Wikipedia](https://dumps.wikimedia.org/viwiki/latest/viwiki-latest-pages-articles.xml.bz2).

## Dataset

The dataset is about 800MB and includes many articles from Vietnamese Wikipedia.

## How to use

You can use this model to:

- Tokenize Vietnamese sentences with GPT2Tokenizer.
- Generate text that reads like a Wikipedia article.
- Fine-tune it for other downstream tasks.

Here is how to use the model to generate text in PyTorch:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('danghuy1999/gpt2-viwiki')
model = GPT2LMHeadModel.from_pretrained('danghuy1999/gpt2-viwiki').to('cuda')

text = "Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử"
input_ids = tokenizer.encode(text, return_tensors='pt').to('cuda')
max_length = 100

sample_outputs = model.generate(input_ids, pad_token_id=tokenizer.eos_token_id,
                                do_sample=True,
                                max_length=max_length,
                                min_length=max_length,
                                top_k=40,
                                num_beams=5,
                                early_stopping=True,
                                no_repeat_ngram_size=2,
                                num_return_sequences=3)

for i, sample_output in enumerate(sample_outputs):
    print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist())))
    print('\n---')
```

And the results are:

```bash
>> Generated text 1

Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử. Mặc dù thuyết tương đối tổng quát không được áp dụng rộng rãi trong nhiều lĩnh vực khác nhau, nhưng các nhà lý thuyết đã đưa ra khái niệm rộng hơn về tính chất của vật chất. Một trong những nghiên cứu của Albert Einstein về sự tồn tại của hệ quy chiếu quán tính, ông đã đề xuất rằng một lực hấp dẫn có thể có khối lượng bằng năng lượng của nó. Tuy nhiên, những người cho rằng

---
>> Generated text 2

Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử. Tuy nhiên, thuyết tương đối hẹp không phải là lý thuyết của Einstein. Cho đến tận cuối thế kỷ 19, Albert Einstein đã chứng minh được sự tồn tại của lực hấp dẫn trong một số trường hợp đặc biệt. Năm 1915, ông đưa ra khái niệm "khối lượng" để miêu tả chuyển động lượng của một hạt bằng khối lượng nghỉ của nó. Ông cho rằng năng lượng "m" là một thành phần của

---
>> Generated text 3

Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử. Tuy nhiên, thuyết tương đối hẹp không được chấp nhận rộng rãi bởi các nhà lý thuyết. Một trong những nghiên cứu của Einstein về tính chất của lực hấp dẫn là vào năm 1905, ông đã đưa ra một khái niệm về lực học. Ông đã phát biểu rằng nếu một hạt mang điện tích dương, nó có thể chuyển đổi năng lượng của nó thành các hạt khác. Năm 1915, Arthur Eddington phát minh ra

---
```

You can do the same with **TensorFlow** by loading the model with **TFGPT2LMHeadModel** instead.
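For illustration, here is a minimal TensorFlow counterpart of the PyTorch snippet above. This is a sketch: it assumes TensorFlow weights are available in the repository and that `TFGPT2LMHeadModel.generate` accepts the same generation arguments as shown for PyTorch:

```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('danghuy1999/gpt2-viwiki')
model = TFGPT2LMHeadModel.from_pretrained('danghuy1999/gpt2-viwiki')

text = "Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử"
input_ids = tokenizer.encode(text, return_tensors='tf')

sample_outputs = model.generate(input_ids,
                                pad_token_id=tokenizer.eos_token_id,
                                do_sample=True,
                                max_length=100,
                                top_k=40,
                                num_beams=5,
                                early_stopping=True,
                                no_repeat_ngram_size=2,
                                num_return_sequences=3)

for i, sample_output in enumerate(sample_outputs):
    # each row of the output tensor is one generated sequence
    print(">> Generated text {}\n\n{}".format(i + 1, tokenizer.decode(sample_output.numpy().tolist())))
    print('\n---')
```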
{"language": "vi", "license": "mit", "tags": ["gpt2-viwiki"]}
danghuy1999/gpt2-viwiki
null
[ "transformers", "pytorch", "tf", "gpt2", "gpt2-viwiki", "vi", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "vi" ]
TAGS #transformers #pytorch #tf #gpt2 #gpt2-viwiki #vi #license-mit #endpoints_compatible #text-generation-inference #region-us
# GPT-2 Fine-tuning in Vietnamese Wikipedia ## Model description This is a Vietnamese GPT-2 model which is finetuned on the Latest pages articles of Vietnamese Wikipedia. ## Dataset The dataset is about 800MB, includes many articles from Wikipedia. ## How to use You can use this model to: - Tokenize Vietnamese sentences with GPT2Tokenizer. - Generate text seems like a Wikipedia article. - Finetune it to other downstream tasks. Here is how to use the model to generate text in Pytorch: And the results are: You can do the same with Tensorflow by using the model TFGPT2Tokenizer instead.
[ "# GPT-2 Fine-tuning in Vietnamese Wikipedia", "## Model description\n\nThis is a Vietnamese GPT-2 model which is finetuned on the Latest pages articles of Vietnamese Wikipedia.", "## Dataset\n\nThe dataset is about 800MB, includes many articles from Wikipedia.", "## How to use\n\nYou can use this model to:\n\n- Tokenize Vietnamese sentences with GPT2Tokenizer.\n- Generate text seems like a Wikipedia article.\n- Finetune it to other downstream tasks.\n\nHere is how to use the model to generate text in Pytorch:\n\n\n\nAnd the results are:\n\n\n\nYou can do the same with Tensorflow by using the model TFGPT2Tokenizer instead." ]
[ "TAGS\n#transformers #pytorch #tf #gpt2 #gpt2-viwiki #vi #license-mit #endpoints_compatible #text-generation-inference #region-us \n", "# GPT-2 Fine-tuning in Vietnamese Wikipedia", "## Model description\n\nThis is a Vietnamese GPT-2 model which is finetuned on the Latest pages articles of Vietnamese Wikipedia.", "## Dataset\n\nThe dataset is about 800MB, includes many articles from Wikipedia.", "## How to use\n\nYou can use this model to:\n\n- Tokenize Vietnamese sentences with GPT2Tokenizer.\n- Generate text seems like a Wikipedia article.\n- Finetune it to other downstream tasks.\n\nHere is how to use the model to generate text in Pytorch:\n\n\n\nAnd the results are:\n\n\n\nYou can do the same with Tensorflow by using the model TFGPT2Tokenizer instead." ]
sentence-similarity
transformers
## Description:
[**Sentence-CamemBERT-Large**](https://huggingface.co/dangvantuan/sentence-camembert-large) is a sentence-embedding model for French developed by [La Javaness](https://www.lajavaness.com/). The model represents the content and semantics of a French sentence as a dense vector, capturing the meaning of the text beyond individual words in queries and documents and enabling powerful semantic search.

## Pre-trained model
The model is state-of-the-art for sentence embeddings in French. It is fine-tuned from the pre-trained [facebook/camembert-large](https://huggingface.co/camembert/camembert-large) using [Siamese BERT-Networks with 'sentence-transformers'](https://www.sbert.net/) on the [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/fr/train) dataset.

## Usage
The model can be used directly (without a language model) as follows:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dangvantuan/sentence-camembert-large")

sentences = ["Un avion est en train de décoller.",
             "Un homme joue d'une grande flûte.",
             "Un homme étale du fromage râpé sur une pizza.",
             "Une personne jette un chat au plafond.",
             "Une personne est en train de plier un morceau de papier.",
             ]

embeddings = model.encode(sentences)
```

## Evaluation
The model can be evaluated as follows on the French test data of stsb.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
from sentence_transformers.readers import InputExample
from datasets import load_dataset

# load the model to evaluate (same checkpoint as in the Usage section)
model = SentenceTransformer("dangvantuan/sentence-camembert-large")

def convert_dataset(dataset):
    dataset_samples = []
    for df in dataset:
        score = float(df['similarity_score']) / 5.0  # Normalize score to range 0 ... 1
        inp_example = InputExample(texts=[df['sentence1'], df['sentence2']], label=score)
        dataset_samples.append(inp_example)
    return dataset_samples

# Loading the dataset for evaluation
df_dev = load_dataset("stsb_multi_mt", name="fr", split="dev")
df_test = load_dataset("stsb_multi_mt", name="fr", split="test")

# Convert the dataset for evaluation

# For Dev set:
dev_samples = convert_dataset(df_dev)
val_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_samples, name='sts-dev')
val_evaluator(model, output_path="./")

# For Test set:
test_samples = convert_dataset(df_test)
test_evaluator = EmbeddingSimilarityEvaluator.from_input_examples(test_samples, name='sts-test')
test_evaluator(model, output_path="./")
```

**Test Result**:
The performance is measured using Pearson and Spearman correlation:

- On dev

| Model | Pearson correlation | Spearman correlation | #params |
| ------------- | ------------- | ------------- | ------------- |
| [dangvantuan/sentence-camembert-large](https://huggingface.co/dangvantuan/sentence-camembert-large) | 88.2 | 88.02 | 336M |
| [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base) | 86.73 | 86.54 | 110M |
| [distiluse-base-multilingual-cased](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased) | 79.22 | 79.16 | 135M |
| [GPT-3 (text-davinci-003)](https://platform.openai.com/docs/models) | 85 | NaN | 175B |
| [GPT-(text-embedding-ada-002)](https://platform.openai.com/docs/models) | 79.75 | 80.44 | NaN |

- On test

| Model | Pearson correlation | Spearman correlation | #params |
| ------------- | ------------- | ------------- | ------------- |
| [dangvantuan/sentence-camembert-large](https://huggingface.co/dangvantuan/sentence-camembert-large) | 85.9 | 85.8 | 336M |
| [dangvantuan/sentence-camembert-base](https://huggingface.co/dangvantuan/sentence-camembert-base) | 82.36 | 81.64 | 110M |
| [distiluse-base-multilingual-cased](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased) | 78.62 | 77.48 | 135M |
| [GPT-3 (text-davinci-003)](https://platform.openai.com/docs/models) | 82 | NaN | 175B |
| [GPT-(text-embedding-ada-002)](https://platform.openai.com/docs/models) | 79.05 | 77.56 | NaN |

## Citation

```bibtex
@article{reimers2019sentence,
   title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
   author={Nils Reimers, Iryna Gurevych},
   journal={https://arxiv.org/abs/1908.10084},
   year={2019}
}

@article{martin2020camembert,
   title={CamemBERT: a Tasty French Language Model},
   author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
   journal={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
   year={2020}
}
```
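To make the semantic-search use case from the Description above more concrete, here is a short similarity-ranking sketch. The query sentence is invented for illustration, and `util.cos_sim` assumes a recent sentence-transformers release (older versions expose the same function as `util.pytorch_cos_sim`):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("dangvantuan/sentence-camembert-large")

# hypothetical query, ranked against a few of the example sentences
query = "Un avion décolle de la piste."
corpus = [
    "Un homme joue d'une grande flûte.",
    "Un avion est en train de décoller.",
    "Une personne est en train de plier un morceau de papier.",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# cosine similarity between the query and each corpus sentence
scores = util.cos_sim(query_emb, corpus_emb)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {sentence}")
```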
{"language": "fr", "license": "apache-2.0", "tags": ["Text", "Sentence Similarity", "Sentence-Embedding", "camembert-large"], "datasets": ["stsb_multi_mt"], "pipeline_tag": "sentence-similarity", "model-index": [{"name": "sentence-camembert-large by Van Tuan DANG", "results": [{"task": {"type": "Text Similarity", "name": "Sentence-Embedding"}, "dataset": {"name": "Text Similarity fr", "type": "stsb_multi_mt", "args": "fr"}, "metrics": [{"type": "Pearson_correlation_coefficient", "value": "xx.xx", "name": "Test Pearson correlation coefficient"}]}]}]}
dangvantuan/sentence-camembert-large
null
[ "transformers", "pytorch", "tf", "safetensors", "camembert", "feature-extraction", "Text", "Sentence Similarity", "Sentence-Embedding", "camembert-large", "sentence-similarity", "fr", "dataset:stsb_multi_mt", "arxiv:1908.10084", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1908.10084" ]
[ "fr" ]
TAGS #transformers #pytorch #tf #safetensors #camembert #feature-extraction #Text #Sentence Similarity #Sentence-Embedding #camembert-large #sentence-similarity #fr #dataset-stsb_multi_mt #arxiv-1908.10084 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
Description: ------------ Sentence-CamemBERT-Large is the Embedding Model for French developed by La Javaness. The purpose of this embedding model is to represent the content and semantics of a French sentence in a mathematical vector which allows it to understand the meaning of the text-beyond individual words in queries and documents, offering a powerful semantic search. Pre-trained sentence embedding models are state-of-the-art of Sentence Embeddings for French. --------------------------------------------------------------------------------------------- The model is Fine-tuned using pre-trained facebook/camembert-large and Siamese BERT-Networks with 'sentences-transformers' on dataset stsb Usage ----- The model can be used directly (without a language model) as follows: Evaluation ---------- The model can be evaluated as follows on the French test data of stsb. Test Result: The performance is measured using Pearson and Spearman correlation: * On dev * On test Model: dangvantuan/sentence-camembert-large, Pearson correlation: 85.9, Spearman correlation: 85.8 Model: dangvantuan/sentence-camembert-base, Pearson correlation: 82.36, Spearman correlation: 81.64 Model: distiluse-base-multilingual-cased, Pearson correlation: 78.62, Spearman correlation: 77.48 Model: GPT-3 (text-davinci-003), Pearson correlation: 82, Spearman correlation: NaN Model: GPT-(text-embedding-ada-002), Pearson correlation: 79.05, Spearman correlation: 77.56 @article{reimers2019sentence, title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks}, author={Nils Reimers, Iryna Gurevych}, journal={URL year={2019} } ``` @article{martin2020camembert, title={CamemBERT: a Tasty French Language Mode}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, journal={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ```
[]
[ "TAGS\n#transformers #pytorch #tf #safetensors #camembert #feature-extraction #Text #Sentence Similarity #Sentence-Embedding #camembert-large #sentence-similarity #fr #dataset-stsb_multi_mt #arxiv-1908.10084 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-en-to-pt This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3295 - Bleu: 5.6807 - Gen Len: 18.6772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 0.5787 | 1.0 | 6250 | 0.4928 | 4.1007 | 18.638 | | 0.5089 | 2.0 | 12500 | 0.4463 | 4.3492 | 18.663 | | 0.4652 | 3.0 | 18750 | 0.4215 | 4.68 | 18.6652 | | 0.4353 | 4.0 | 25000 | 0.3980 | 4.8172 | 18.6708 | | 0.4042 | 5.0 | 31250 | 0.3799 | 4.9719 | 18.6514 | | 0.3734 | 6.0 | 37500 | 0.3676 | 5.2226 | 18.6572 | | 0.3396 | 7.0 | 43750 | 0.3513 | 5.2693 | 18.6596 | | 0.308 | 8.0 | 50000 | 0.3400 | 5.4546 | 18.676 | | 0.2767 | 9.0 | 56250 | 0.3331 | 5.5649 | 18.6708 | | 0.2424 | 10.0 | 62500 | 0.3295 | 5.6807 | 18.6772 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
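The card does not include an inference example. Below is a hedged sketch of running the checkpoint for English-to-Portuguese translation with the `text2text-generation` pipeline; the `translate English to Portuguese:` task prefix is an assumption (typical for T5 fine-tunes) and is not documented by the card:

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="danhsf/t5-small-finetuned-en-to-pt")

# assumption: a T5-style task prefix was used during fine-tuning
prefix = "translate English to Portuguese: "
out = translator(prefix + "The house is wonderful.", max_length=64)
print(out[0]["generated_text"])
```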
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-pt", "results": []}]}
danhsf/t5-small-finetuned-en-to-pt
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-en-to-pt =========================== This model is a fine-tuned version of t5-small on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.3295 * Bleu: 5.6807 * Gen Len: 18.6772 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.005 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-en-to-ro-lr_2e-3-fp_false This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.4239 - Bleu: 7.1921 - Gen Len: 18.2611 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 0.8922 | 0.05 | 2000 | 1.7000 | 6.5274 | 18.2656 | | 0.8621 | 0.1 | 4000 | 1.6409 | 6.6411 | 18.2311 | | 0.8433 | 0.16 | 6000 | 1.6396 | 6.6601 | 18.2596 | | 0.8297 | 0.21 | 8000 | 1.6304 | 6.7129 | 18.2581 | | 0.8006 | 0.26 | 10000 | 1.6022 | 6.6067 | 18.2816 | | 0.793 | 0.31 | 12000 | 1.5999 | 6.551 | 18.2631 | | 0.774 | 0.37 | 14000 | 1.5586 | 6.7105 | 18.2661 | | 0.7618 | 0.42 | 16000 | 1.5769 | 6.7278 | 18.2526 | | 0.7463 | 0.47 | 18000 | 1.5625 | 6.6972 | 18.2201 | | 0.7394 | 0.52 | 20000 | 1.5377 | 6.936 | 18.2491 | | 0.7203 | 0.58 | 22000 | 1.5191 | 7.0205 | 18.2731 | | 0.7158 | 0.63 | 24000 | 1.5055 | 6.835 | 18.2506 | | 0.688 | 0.68 | 26000 | 1.4779 | 7.0534 | 18.2716 | | 0.678 | 0.73 | 28000 | 1.4691 | 6.9735 | 18.2616 | | 0.6677 | 0.79 | 30000 | 1.4702 | 7.0359 | 18.2496 | | 0.6568 | 0.84 | 32000 | 1.4534 | 6.9982 | 18.2556 | | 0.6475 | 0.89 | 34000 | 1.4427 | 7.0443 | 18.2466 | | 0.6395 | 0.94 | 36000 | 1.4265 | 7.1205 | 18.2721 | | 0.6319 | 1.0 | 38000 | 1.4239 | 7.1921 | 18.2611 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-lr_2e-3-fp_false", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 7.1921, "name": "Bleu"}]}]}]}
danhsf/t5-small-finetuned-en-to-ro-lr_2e-3-fp_false
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-en-to-ro-lr\_2e-3-fp\_false ============================================== This model is a fine-tuned version of t5-small on the wmt16 dataset. It achieves the following results on the evaluation set: * Loss: 1.4239 * Bleu: 7.1921 * Gen Len: 18.2611 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.002 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 457311749 - CO2 Emissions (in grams): 10.148805588432941 ## Validation Metrics - Loss: 1.647747278213501 - Rouge1: 32.4854 - Rouge2: 19.8974 - RougeL: 30.0602 - RougeLsum: 29.9377 - Gen Len: 46.6556 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/danicodes/autonlp-legal-text-summary-457311749 ```
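Besides the cURL call, the model can also be loaded locally with the Transformers library. A minimal sketch (assuming the AutoNLP export includes the matching tokenizer) might look like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "danicodes/autonlp-legal-text-summary-457311749"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# placeholder input text
text = "Replace this with the legal text you want to summarize."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```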
{"language": "unk", "tags": "autonlp", "datasets": ["danicodes/autonlp-data-legal-text-summary"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 10.148805588432941}
danicodes/autonlp-legal-text-summary-457311749
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "unk", "dataset:danicodes/autonlp-data-legal-text-summary", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "unk" ]
TAGS #transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-danicodes/autonlp-data-legal-text-summary #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 457311749 - CO2 Emissions (in grams): 10.148805588432941 ## Validation Metrics - Loss: 1.647747278213501 - Rouge1: 32.4854 - Rouge2: 19.8974 - RougeL: 30.0602 - RougeLsum: 29.9377 - Gen Len: 46.6556 ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 457311749\n- CO2 Emissions (in grams): 10.148805588432941", "## Validation Metrics\n\n- Loss: 1.647747278213501\n- Rouge1: 32.4854\n- Rouge2: 19.8974\n- RougeL: 30.0602\n- RougeLsum: 29.9377\n- Gen Len: 46.6556", "## Usage\n\nYou can use cURL to access this model:" ]
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-danicodes/autonlp-data-legal-text-summary #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 457311749\n- CO2 Emissions (in grams): 10.148805588432941", "## Validation Metrics\n\n- Loss: 1.647747278213501\n- Rouge1: 32.4854\n- Rouge2: 19.8974\n- RougeL: 30.0602\n- RougeLsum: 29.9377\n- Gen Len: 46.6556", "## Usage\n\nYou can use cURL to access this model:" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-fi-to-en This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset. It achieves the following results on the evaluation set: - Loss: 3.5235 - Bleu: 1.129 - Gen Len: 17.088 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-----:|:-------:| | 3.414 | 1.0 | 6250 | 3.5235 | 1.129 | 17.088 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt19"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-fi-to-en", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt19", "type": "wmt19", "args": "fi-en"}, "metrics": [{"type": "bleu", "value": 1.129, "name": "Bleu"}]}]}]}
danielbispov/t5-small-finetuned-fi-to-en
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt19", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt19 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-fi-to-en =========================== This model is a fine-tuned version of t5-small on the wmt19 dataset. It achieves the following results on the evaluation set: * Loss: 3.5235 * Bleu: 1.129 * Gen Len: 17.088 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.9.1 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt19 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bangla_asr This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200) on the None dataset. It achieves the following results on the evaluation set: - Loss: 157.8652 - Wer: 0.4507 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2601.5363 | 7.46 | 500 | 259.6630 | 0.6863 | | 417.7386 | 14.93 | 1000 | 156.6117 | 0.5275 | | 262.9455 | 22.39 | 1500 | 155.0886 | 0.5006 | | 178.7715 | 29.85 | 2000 | 155.1077 | 0.4840 | | 132.448 | 37.31 | 2500 | 163.8623 | 0.4770 | | 116.3943 | 44.78 | 3000 | 161.5531 | 0.4609 | | 87.1653 | 52.24 | 3500 | 165.6857 | 0.4597 | | 80.5606 | 59.7 | 4000 | 157.8652 | 0.4507 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "model-index": [{"name": "bangla_asr", "results": []}]}
danielbubiola/bangla_asr
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #endpoints_compatible #region-us
bangla\_asr =========== This model is a fine-tuned version of Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200 on the None dataset. It achieves the following results on the evaluation set: * Loss: 157.8652 * Wer: 0.4507 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 60 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # daniel_asr This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4565 - Wer: 0.3423 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4909 | 4.0 | 500 | 1.3485 | 0.8887 | | 0.5887 | 8.0 | 1000 | 0.4957 | 0.4641 | | 0.2207 | 12.0 | 1500 | 0.4621 | 0.3971 | | 0.125 | 16.0 | 2000 | 0.4339 | 0.3756 | | 0.0829 | 20.0 | 2500 | 0.4618 | 0.3613 | | 0.0601 | 24.0 | 3000 | 0.4564 | 0.3535 | | 0.0456 | 28.0 | 3500 | 0.4565 | 0.3423 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
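No inference snippet is given in the card. A short sketch using the `automatic-speech-recognition` pipeline is shown below; it assumes the repository ships a matching processor/tokenizer and that the audio is 16 kHz mono, which is what wav2vec2-base expects:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="danielbubiola/daniel_asr")

# path to a local 16 kHz mono WAV file (placeholder)
result = asr("sample.wav")
print(result["text"])
```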
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "daniel_asr", "results": []}]}
danielbubiola/daniel_asr
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
daniel\_asr =========== This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.4565 * Wer: 0.3423 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
token-classification
spacy
| Feature | Description | | --- | --- | | **Name** | `en_acnl_electra_pipeline` | | **Version** | `0.0.1` | | **spaCy** | `>=3.1.3,<3.2.0` | | **Default Pipeline** | `transformer`, `tagger`, `parser` | | **Components** | `transformer`, `tagger`, `parser` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | GPL | | **Author** | Daniel Vasić() | ### Label Scheme <details> <summary>View label scheme (87 labels for 2 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `VERB`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `dative`, `dep`, `det`, `dobj`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nummod`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | </details> ### Accuracy | Type | Score | | --- | --- | | `TAG_ACC` | 97.69 | | `DEP_UAS` | 95.77 | | `DEP_LAS` | 94.52 | | `SENTS_P` | 95.09 | | `SENTS_R` | 94.81 | | `SENTS_F` | 94.95 | | `TRANSFORMER_LOSS` | 6123357.72 | | `TAGGER_LOSS` | 338995.26 | | `PARSER_LOSS` | 4101825.66 |
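The card lists the pipeline components and accuracy but no loading example. A minimal sketch is shown below; it assumes the packaged pipeline has already been installed from this repository (for example via `pip install` of the wheel the repo provides), which is not documented in the card itself:

```python
import spacy

# assumes the en_acnl_electra_pipeline package has been installed from the Hugging Face repo
nlp = spacy.load("en_acnl_electra_pipeline")

doc = nlp("The committee approved the new budget on Tuesday.")
for token in doc:
    # fine-grained POS tag and dependency relation, as produced by the tagger and parser components
    print(token.text, token.tag_, token.dep_, token.head.text)
```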
{"language": ["en"], "tags": ["spacy", "token-classification"]}
danielvasic/en_acnl_electra_pipeline
null
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #spacy #token-classification #en #model-index #region-us
### Label Scheme View label scheme (87 labels for 2 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (87 labels for 2 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #en #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (87 labels for 2 components)", "### Accuracy" ]
text-classification
spacy
| Feature | Description | | --- | --- | | **Name** | `en_acnl_roberta_pipeline` | | **Version** | `0.0.1` | | **spaCy** | `>=3.1.3,<3.2.0` | | **Default Pipeline** | `transformer`, `tagger`, `parser` | | **Components** | `transformer`, `tagger`, `parser` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | OntoNotes | | **License** | CC BY-SA 4.0 | | **Author** | Daniel Vasić | ### Label Scheme <details> <summary>View label scheme (87 labels for 2 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `VERB`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `dative`, `dep`, `det`, `dobj`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nummod`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | </details> ### Accuracy | Type | Score | | --- | --- | | `TAG_ACC` | 98.05 | | `DEP_UAS` | 95.98 | | `DEP_LAS` | 94.83 | | `SENTS_P` | 93.80 | | `SENTS_R` | 95.42 | | `SENTS_F` | 94.61 | | `TRANSFORMER_LOSS` | 3784861.59 | | `TAGGER_LOSS` | 698704.80 | | `PARSER_LOSS` | 5540167.00 |
{"language": ["en"], "license": "cc-by-4.0", "library_name": "spacy", "tags": ["spacy", "token-classification"], "datasets": ["conll2012_ontonotesv5"], "metrics": ["f1"], "pipeline_tag": "text-classification"}
danielvasic/en_acnl_roberta_pipeline
null
[ "spacy", "token-classification", "text-classification", "en", "dataset:conll2012_ontonotesv5", "license:cc-by-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #spacy #token-classification #text-classification #en #dataset-conll2012_ontonotesv5 #license-cc-by-4.0 #model-index #region-us
### Label Scheme

View label scheme (87 labels for 2 components)

### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (87 labels for 2 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #text-classification #en #dataset-conll2012_ontonotesv5 #license-cc-by-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (87 labels for 2 components)", "### Accuracy" ]
token-classification
spacy
| Feature | Description |
| --- | --- |
| **Name** | `hr_bertic_pipeline` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.1.3,<3.2.0` |
| **Default Pipeline** | `transformer`, `morphologizer`, `tagger`, `parser` |
| **Components** | `transformer`, `morphologizer`, `tagger`, `parser` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |

### Label Scheme

<details>
<summary>View label scheme (1392 labels for 3 components)</summary>

| Component | Labels |
| --- | --- |
| **`morphologizer`** | `Case=nominative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Case=locative\|POS=ADP`, `Case=locative\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Case=instrumental\|POS=ADP`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Degree=positive\|POS=ADV\|Type=general`, `Number=singular\|POS=VERB\|Person=third\|Type=main\|VForm=present`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Case=locative\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `POS=PUNCT`, `POS=PART\|Type=modal`, `Case=locative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `POS=SCONJ\|Type=subordinating`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=accusative\|Number=singular\|POS=PRON\|Type=reflexive`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `POS=CCONJ\|Type=coordinating`, `Case=genitive\|POS=ADP`, `Case=dative\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Number=plural\|POS=VERB\|Person=third\|Type=main\|VForm=present`, `Number=singular\|POS=AUX\|Person=third\|Type=auxiliary\|VForm=present`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=accusative\|POS=ADP`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Animate=no\|Case=accusative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `POS=VERB\|Type=main\|VForm=infinitive`, 
`Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `Case=nominative\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `POS=PART\|Type=negative`, `Case=accusative\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Degree=comparative\|POS=ADV\|Type=general`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=nominative\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Gender=masculine\|Number=singular\|POS=VERB\|Type=main\|VForm=participle`, `Case=locative\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Form=digit\|POS=ADJ\|Type=ordinal`, `Number=singular\|POS=AUX\|Person=first\|Type=auxiliary\|VForm=present`, `Number=plural\|POS=AUX\|Person=third\|Type=auxiliary\|VForm=present`, `Case=accusative\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `Gender=feminine\|Number=plural\|POS=VERB\|Type=main\|VForm=participle`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Gender=neuter\|Number=singular\|POS=VERB\|Type=main\|VForm=participle`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Degree=superlative\|POS=ADV\|Type=general`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=locative\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=nominative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Number=plural\|POS=VERB\|Person=first\|Type=main\|VForm=present`, 
`Case=nominative\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Gender=feminine\|Number=plural\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Gender=masculine\|Number=singular\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Gender=masculine\|Number=plural\|POS=VERB\|Type=main\|VForm=participle`, `Form=digit\|POS=NUM\|Type=cardinal`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `Gender=feminine\|Number=singular\|POS=VERB\|Type=main\|VForm=participle`, `Case=accusative\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=locative\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=nominative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Animate=no\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Gender=neuter\|Number=singular\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Case=locative\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=genitive\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=participle`, `Number=plural\|POS=AUX\|Person=third\|Type=auxiliary\|VForm=aorist`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, 
`Case=dative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Case=locative\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=dative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=locative\|Gender=neuter\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Number=singular\|POS=AUX\|Person=third\|Type=auxiliary\|VForm=aorist`, `POS=X`, `Case=genitive\|Form=letter\|POS=NUM\|Type=cardinal`, `Case=genitive\|Form=letter\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=cardinal`, `Form=letter\|POS=NUM\|Type=cardinal`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=nominative\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=DET\|Type=reflexive`, `POS=X\|Type=foreign`, `Number=plural\|POS=VERB\|Person=second\|Type=main\|VForm=present`, `POS=PART\|Type=interrogative`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `POS=ADV\|Type=participle`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Number=singular\|POS=VERB\|Person=first\|Type=main\|VForm=present`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, 
`Case=dative\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Animate=yes\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Animate=yes\|Case=accusative\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=nominative\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Number=plural\|POS=AUX\|Person=first\|Type=auxiliary\|VForm=present`, `POS=AUX\|Type=auxiliary\|VForm=infinitive`, `Case=locative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Gender=feminine\|Number=singular\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Animate=no\|Case=accusative\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=neuter\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=dative\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=nominative\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=genitive\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=instrumental\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=dative\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Type=reflexive`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=locative\|Gender=neuter\|Number=plural\|POS=DET\|Type=reflexive`, `Case=nominative\|Gender=masculine\|POS=PRON\|Person=third\|Type=indefinite`, 
`Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Number=plural\|POS=AUX\|Person=first\|Type=auxiliary\|VForm=aorist`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=dative\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Case=nominative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=dative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `POS=NOUN`, `Case=vocative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=accusative\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=locative\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=accusative\|Gender=neuter\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Gender=neuter\|POS=PRON\|Person=third\|Type=interrogative`, `Case=nominative\|Number=plural\|POS=PRON\|Person=second\|Type=personal`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Number=plural\|POS=AUX\|Person=second\|Type=auxiliary\|VForm=present`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=dative\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=dative\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `Number=singular\|POS=AUX\|Person=first\|Type=auxiliary\|VForm=aorist`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=accusative\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Type=interrogative`, 
`Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Type=interrogative`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Animate=no\|Case=accusative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=dative\|Gender=masculine\|POS=PRON\|Person=third\|Type=indefinite`, `Case=instrumental\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=dative\|POS=ADP`, `Case=instrumental\|Number=singular\|POS=PRON\|Type=reflexive`, `Case=genitive\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=locative\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=locative\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Gender=neuter\|Number=plural\|POS=VERB\|Type=main\|VForm=participle`, `Case=nominative\|Form=letter\|Gender=neuter\|POS=NUM\|Type=cardinal`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=dative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Case=accusative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=cardinal`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Form=letter\|POS=NUM\|Type=special`, `Case=accusative\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=genitive\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Case=genitive\|Form=letter\|Gender=feminine\|POS=NUM\|Type=cardinal`, `Animate=no\|Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Type=interrogative`, `Case=nominative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Case=instrumental\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, 
`Case=dative\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=dative\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=genitive\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=locative\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=instrumental\|Gender=neuter\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=instrumental\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=dative\|Gender=feminine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=participle`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Gender=masculine\|Number=plural\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Case=genitive\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=dative\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, 
`Case=genitive\|Form=letter\|Gender=neuter\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=nominative\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=instrumental\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=dative\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `POS=PROPN`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=instrumental\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Form=letter\|Gender=masculine\|POS=NUM\|Type=special`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Animate=no\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=locative\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=accusative\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=accusative\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=dative\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=locative\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Animate=no\|Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Type=reflexive`, `Gender=neuter\|Number=plural\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Case=instrumental\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=locative\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=possessive`, `Number=plural\|POS=VERB\|Person=second\|Type=main\|VForm=imperative`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=genitive\|Gender=neuter\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=locative\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=PROPN\|Type=proper`, 
`Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=instrumental\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=instrumental\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=dative\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=dative\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=dative\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Form=digit\|POS=SYM\|Type=special`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=participle`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=possessive`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=cardinal`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=nominative\|Form=letter\|Gender=neuter\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Form=digit\|POS=NUM\|Type=special`, `Case=genitive\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=locative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=dative\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Type=reflexive`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, 
`Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=genitive\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=genitive\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Number=singular\|POS=AUX\|Person=second\|Type=auxiliary\|VForm=aorist`, `Case=dative\|Gender=masculine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `POS=SYM`, `Case=instrumental\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=dative\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Animate=yes\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=genitive\|Form=letter\|Gender=masculine\|POS=NUM\|Type=cardinal`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Case=vocative\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Case=instrumental\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=genitive\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Form=digit\|POS=NUM\|Type=multiple`, `Case=instrumental\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=locative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, 
`Case=genitive\|Gender=neuter\|Number=plural\|POS=DET\|Type=reflexive`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=accusative\|Gender=neuter\|Number=plural\|POS=DET\|Type=reflexive`, `Animate=yes\|Case=accusative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Number=plural\|POS=VERB\|Person=first\|Type=main\|VForm=imperative`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=possessive`, `Animate=yes\|Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=no\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=locative\|Number=singular\|POS=PRON\|Type=reflexive`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=nominative\|Form=letter\|Gender=masculine\|Number=plural\|POS=NUM\|Type=cardinal`, `Case=accusative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Type=interrogative`, `Case=accusative\|Gender=neuter\|POS=PRON\|Person=third\|Type=interrogative`, `Case=locative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Animate=no\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=locative\|Gender=feminine\|Number=plural\|POS=PROPN\|Type=proper`, `Animate=no\|Case=accusative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=locative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=dative\|Number=plural\|POS=PRON\|Person=second\|Type=personal`, `Case=accusative\|Form=letter\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=ordinal`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=PRON\|Type=indefinite`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=genitive\|Number=singular\|POS=PRON\|Type=reflexive`, `Case=genitive\|Gender=neuter\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=dative\|Number=singular\|POS=PRON\|Type=reflexive`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, 
`Case=locative\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=locative\|Form=letter\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=locative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Gender=neuter\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=accusative\|Gender=masculine\|POS=PRON\|Person=third\|Type=indefinite`, `Case=locative\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=instrumental\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Number=singular\|POS=VERB\|Person=third\|Type=main\|VForm=aorist`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=instrumental\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `POS=ADJ`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=instrumental\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=locative\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=instrumental\|Form=letter\|POS=NUM\|Type=cardinal`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=instrumental\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=accusative\|Form=letter\|Gender=neuter\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=special`, `Case=dative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Number=singular\|POS=VERB\|Person=second\|Type=main\|VForm=imperative`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, 
`Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Form=roman\|POS=NUM\|Type=cardinal`, `Case=instrumental\|Gender=feminine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=dative\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=participle`, `Case=dative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Form=letter\|Gender=masculine\|Number=plural\|POS=NUM\|Type=special`, `Case=locative\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=instrumental\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=accusative\|Number=plural\|POS=PRON\|Person=second\|Type=personal`, `Case=genitive\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=locative\|Form=letter\|POS=NUM\|Type=cardinal`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Type=reflexive`, `Case=instrumental\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=genitive\|Gender=masculine\|POS=PRON\|Person=third\|Type=indefinite`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Form=roman\|POS=ADJ\|Type=ordinal`, `Case=dative\|Definiteness=no\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Gender=neuter\|Number=plural\|POS=PROPN\|Type=proper`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=locative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=cardinal`, `Number=plural\|POS=AUX\|Person=second\|Type=auxiliary\|VForm=aorist`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=genitive\|POS=SYM`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=possessive`, 
`Case=nominative\|Gender=masculine\|POS=PRON\|Person=third\|Type=interrogative`, `Case=locative\|Definiteness=no\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `POS=PART`, `Case=locative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Case=genitive\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=dative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=dative\|Gender=masculine\|POS=PRON\|Person=third\|Type=interrogative`, `Case=instrumental\|Definiteness=no\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `POS=INTJ`, `Case=locative\|Gender=neuter\|POS=PRON\|Person=third\|Type=interrogative`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `POS=PART\|Type=affirmative`, `Number=singular\|POS=VERB\|Person=second\|Type=main\|VForm=present`, `Case=dative\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=locative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Type=interrogative`, `Case=dative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=instrumental\|Gender=masculine\|POS=PRON\|Person=third\|Type=indefinite`, `Case=locative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Animate=yes\|Case=accusative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=dative\|Gender=neuter\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Type=interrogative`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Type=reflexive`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Form=letter\|Gender=neuter\|POS=NUM\|Type=special`, `Case=locative\|Form=letter\|Gender=masculine\|Number=plural\|POS=NUM\|Type=cardinal`, `Case=accusative\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, 
`Case=locative\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Number=singular\|POS=AUX\|Person=second\|Type=auxiliary\|VForm=present`, `Case=vocative\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=genitive\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Animate=yes\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=vocative\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Case=locative\|Form=letter\|Gender=neuter\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=vocative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=vocative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=vocative\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `Case=nominative\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, `Case=genitive\|Number=plural\|POS=PRON\|Person=second\|Type=personal`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Number=plural\|POS=AUX\|Person=second\|Type=auxiliary\|VForm=imperative`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=genitive\|Definiteness=no\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=vocative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Number=singular\|POS=AUX\|Person=third\|Type=auxiliary\|VForm=imperfect`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=genitive\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=nominative\|Form=letter\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `POS=ADV`, `Case=locative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=special`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, 
`Case=vocative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=vocative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=nominative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=special`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=PROPN\|Type=proper`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=vocative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=vocative\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, `Case=genitive\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=nominative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=cardinal`, `Case=accusative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=instrumental\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Animate=yes\|Case=accusative\|Definiteness=no\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Type=interrogative`, `Case=dative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=vocative\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, `Animate=yes\|Case=accusative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=dative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=cardinal`, `Case=dative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=instrumental\|Gender=neuter\|Number=plural\|POS=DET\|Type=reflexive`, `Case=dative\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Case=vocative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=vocative\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Animate=yes\|Case=accusative\|Definiteness=no\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Type=interrogative`, 
`Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Type=interrogative`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Person=second\|Type=possessive` | | **`tagger`** | `Agcfpay`, `Agcfpdy`, `Agcfpgy`, `Agcfpiy`, `Agcfply`, `Agcfpny`, `Agcfsay`, `Agcfsdy`, `Agcfsgy`, `Agcfsiy`, `Agcfsly`, `Agcfsny`, `Agcmpay`, `Agcmpgy`, `Agcmpiy`, `Agcmply`, `Agcmpny`, `Agcmsany`, `Agcmsay`, `Agcmsayn`, `Agcmsdy`, `Agcmsgy`, `Agcmsiy`, `Agcmsly`, `Agcmsny`, `Agcnpay`, `Agcnpdy`, `Agcnpgy`, `Agcnpny`, `Agcnsay`, `Agcnsdy`, `Agcnsgy`, `Agcnsiy`, `Agcnsly`, `Agcnsny`, `Agpfpay`, `Agpfpdy`, `Agpfpgy`, `Agpfpiy`, `Agpfply`, `Agpfpny`, `Agpfsay`, `Agpfsdy`, `Agpfsgy`, `Agpfsin`, `Agpfsiy`, `Agpfsly`, `Agpfsny`, `Agpfsvy`, `Agpmpay`, `Agpmpdy`, `Agpmpgy`, `Agpmpiy`, `Agpmply`, `Agpmpny`, `Agpmpvy`, `Agpmsan`, `Agpmsann`, `Agpmsany`, `Agpmsay`, `Agpmsayn`, `Agpmsayy`, `Agpmsdy`, `Agpmsgn`, `Agpmsgy`, `Agpmsiy`, `Agpmsln`, `Agpmsly`, `Agpmsnn`, `Agpmsny`, `Agpmsvy`, `Agpnpay`, `Agpnpdy`, `Agpnpgy`, `Agpnpiy`, `Agpnply`, `Agpnpny`, `Agpnsay`, `Agpnsdy`, `Agpnsgn`, `Agpnsgy`, `Agpnsiy`, `Agpnsln`, `Agpnsly`, `Agpnsny`, `Agsfpay`, `Agsfpdy`, `Agsfpgy`, `Agsfpiy`, `Agsfply`, `Agsfpny`, `Agsfsay`, `Agsfsgy`, `Agsfsiy`, `Agsfsly`, `Agsfsny`, `Agsmpay`, `Agsmpdy`, `Agsmpgy`, `Agsmpiy`, `Agsmply`, `Agsmpny`, `Agsmpvy`, `Agsmsany`, `Agsmsay`, `Agsmsayn`, `Agsmsayy`, `Agsmsdy`, `Agsmsgy`, `Agsmsiy`, `Agsmsly`, `Agsmsny`, `Agsnpay`, `Agsnpgy`, `Agsnply`, `Agsnpny`, `Agsnsay`, `Agsnsdy`, `Agsnsiy`, `Agsnsly`, `Agsnsny`, `Appfpay`, `Appfpdy`, `Appfpgy`, `Appfpiy`, `Appfply`, `Appfpny`, `Appfsay`, `Appfsgy`, `Appfsiy`, `Appfsly`, `Appfsny`, `Appmpay`, `Appmpdy`, `Appmpgy`, `Appmpiy`, `Appmply`, `Appmpny`, `Appmsann`, `Appmsany`, `Appmsayn`, `Appmsayy`, `Appmsdy`, `Appmsgn`, `Appmsgy`, `Appmsiy`, `Appmsly`, `Appmsnn`, `Appmsny`, `Appnpay`, `Appnpdy`, `Appnpgy`, `Appnpiy`, `Appnply`, `Appnpny`, `Appnsay`, `Appnsgy`, `Appnsly`, `Appnsny`, `Aspfpay`, `Aspfpgy`, `Aspfpiy`, `Aspfply`, `Aspfpny`, `Aspfsay`, `Aspfsdy`, `Aspfsgy`, `Aspfsiy`, `Aspfsly`, `Aspfsny`, `Aspmpay`, `Aspmpgy`, `Aspmply`, `Aspmpny`, `Aspmsayn`, `Aspmsayy`, `Aspmsdy`, `Aspmsgn`, `Aspmsgy`, `Aspmsiy`, `Aspmsln`, `Aspmsly`, `Aspmsnn`, `Aspnpay`, `Aspnpgy`, `Aspnpny`, `Aspnsay`, `Aspnsdn`, `Aspnsgn`, `Aspnsgy`, `Aspnsly`, `Aspnsny`, `Cc`, `Cs`, `I`, `Mdc`, `Mdm`, `Mdo`, `Mds`, `Mlc`, `Mlc--g`, `Mlc--i`, `Mlc--l`, `Mlcf-a`, `Mlcf-d`, `Mlcf-g`, `Mlcf-n`, `Mlcfsa`, `Mlcfsd`, `Mlcfsg`, `Mlcfsi`, `Mlcfsl`, `Mlcfsn`, `Mlcm-a`, `Mlcm-g`, `Mlcm-l`, `Mlcm-n`, `Mlcmpl`, `Mlcmpn`, `Mlcmsan`, `Mlcmsay`, `Mlcmsg`, `Mlcmsi`, `Mlcmsl`, `Mlcmsn`, `Mlcn-n`, `Mlcnsa`, `Mlcnsg`, `Mlcnsl`, `Mlcnsn`, `Mlofpa`, `Mlofpd`, `Mlofpg`, `Mlofpi`, `Mlofpl`, `Mlofpn`, `Mlofsa`, `Mlofsd`, `Mlofsg`, `Mlofsi`, `Mlofsl`, `Mlofsn`, `Mlompa`, `Mlompd`, `Mlompg`, `Mlompi`, `Mlompl`, `Mlompn`, `Mlomsan`, `Mlomsay`, `Mlomsd`, `Mlomsg`, `Mlomsi`, `Mlomsl`, `Mlomsn`, `Mlomsv`, `Mlonpa`, `Mlonpg`, `Mlonpl`, `Mlonpn`, `Mlonsa`, `Mlonsd`, `Mlonsg`, `Mlonsi`, `Mlonsl`, `Mlonsn`, `Mls`, `Mlsf-a`, `Mlsf-d`, `Mlsf-g`, `Mlsf-i`, `Mlsf-l`, `Mlsf-n`, `Mlsm-a`, `Mlsm-g`, `Mlsm-l`, `Mlsm-n`, `Mlsmpn`, `Mlsn-n`, `Mrc`, `Mro`, `Ncfpa`, `Ncfpd`, `Ncfpg`, `Ncfpi`, `Ncfpl`, `Ncfpn`, `Ncfpv`, `Ncfsa`, `Ncfsd`, `Ncfsg`, `Ncfsi`, `Ncfsl`, `Ncfsn`, `Ncfsv`, `Ncmpa`, `Ncmpd`, `Ncmpg`, `Ncmpi`, `Ncmpl`, `Ncmpn`, `Ncmpv`, `Ncmsan`, `Ncmsay`, `Ncmsd`, `Ncmsg`, `Ncmsi`, `Ncmsl`, `Ncmsn`, `Ncmsv`, `Ncnpa`, `Ncnpd`, 
`Ncnpg`, `Ncnpi`, `Ncnpl`, `Ncnpn`, `Ncnsa`, `Ncnsd`, `Ncnsg`, `Ncnsi`, `Ncnsl`, `Ncnsn`, `Ncnsv`, `Npfpa`, `Npfpg`, `Npfpl`, `Npfpn`, `Npfsa`, `Npfsd`, `Npfsg`, `Npfsi`, `Npfsl`, `Npfsn`, `Npmpa`, `Npmpd`, `Npmpg`, `Npmpi`, `Npmpl`, `Npmpn`, `Npmsan`, `Npmsay`, `Npmsd`, `Npmsg`, `Npmsi`, `Npmsl`, `Npmsn`, `Npmsv`, `Npnpg`, `Npnpn`, `Npnsa`, `Npnsd`, `Npnsg`, `Npnsi`, `Npnsl`, `Npnsn`, `Pd-fpa`, `Pd-fpd`, `Pd-fpg`, `Pd-fpi`, `Pd-fpl`, `Pd-fpn`, `Pd-fsa`, `Pd-fsd`, `Pd-fsg`, `Pd-fsi`, `Pd-fsl`, `Pd-fsn`, `Pd-mpa`, `Pd-mpd`, `Pd-mpg`, `Pd-mpi`, `Pd-mpl`, `Pd-mpn`, `Pd-msan`, `Pd-msay`, `Pd-msd`, `Pd-msg`, `Pd-msi`, `Pd-msl`, `Pd-msn`, `Pd-npa`, `Pd-npd`, `Pd-npg`, `Pd-npi`, `Pd-npn`, `Pd-nsa`, `Pd-nsd`, `Pd-nsg`, `Pd-nsi`, `Pd-nsl`, `Pd-nsn`, `Pi-fpa`, `Pi-fpd`, `Pi-fpg`, `Pi-fpi`, `Pi-fpl`, `Pi-fpn`, `Pi-fsa`, `Pi-fsd`, `Pi-fsg`, `Pi-fsi`, `Pi-fsl`, `Pi-fsn`, `Pi-mpa`, `Pi-mpd`, `Pi-mpg`, `Pi-mpi`, `Pi-mpl`, `Pi-mpn`, `Pi-msan`, `Pi-msay`, `Pi-msd`, `Pi-msg`, `Pi-msi`, `Pi-msl`, `Pi-msn`, `Pi-npa`, `Pi-npd`, `Pi-npg`, `Pi-npi`, `Pi-npl`, `Pi-npn`, `Pi-nsa`, `Pi-nsd`, `Pi-nsg`, `Pi-nsi`, `Pi-nsl`, `Pi-nsn`, `Pi3m-a`, `Pi3m-d`, `Pi3m-g`, `Pi3m-i`, `Pi3m-n`, `Pi3n-a`, `Pi3n-d`, `Pi3n-g`, `Pi3n-i`, `Pi3n-l`, `Pi3n-n`, `Pp1-pa`, `Pp1-pd`, `Pp1-pg`, `Pp1-pi`, `Pp1-pl`, `Pp1-pn`, `Pp1-sa`, `Pp1-sd`, `Pp1-sg`, `Pp1-si`, `Pp1-sl`, `Pp1-sn`, `Pp2-pa`, `Pp2-pd`, `Pp2-pg`, `Pp2-pn`, `Pp2-sa`, `Pp2-sd`, `Pp2-sg`, `Pp2-sl`, `Pp2-sn`, `Pp2-sv`, `Pp3-pa`, `Pp3-pd`, `Pp3-pg`, `Pp3-pi`, `Pp3-pl`, `Pp3fpn`, `Pp3fsa`, `Pp3fsd`, `Pp3fsg`, `Pp3fsi`, `Pp3fsl`, `Pp3fsn`, `Pp3mpn`, `Pp3msa`, `Pp3msd`, `Pp3msg`, `Pp3msi`, `Pp3msl`, `Pp3msn`, `Pp3npn`, `Pp3nsa`, `Pp3nsi`, `Pp3nsn`, `Pq-fpa`, `Pq-fpn`, `Pq-fsa`, `Pq-fsi`, `Pq-fsl`, `Pq-fsn`, `Pq-mpn`, `Pq-msn`, `Pq-nsn`, `Pq3m-d`, `Pq3m-n`, `Pq3n-a`, `Pq3n-l`, `Pq3n-n`, `Ps1fpa`, `Ps1fpd`, `Ps1fpg`, `Ps1fpl`, `Ps1fpn`, `Ps1fsa`, `Ps1fsd`, `Ps1fsg`, `Ps1fsi`, `Ps1fsl`, `Ps1fsn`, `Ps1fsv`, `Ps1mpa`, `Ps1mpd`, `Ps1mpg`, `Ps1mpi`, `Ps1mpl`, `Ps1mpn`, `Ps1mpv`, `Ps1msan`, `Ps1msay`, `Ps1msd`, `Ps1msg`, `Ps1msi`, `Ps1msl`, `Ps1msn`, `Ps1msv`, `Ps1npd`, `Ps1npn`, `Ps1nsa`, `Ps1nsg`, `Ps1nsi`, `Ps1nsl`, `Ps1nsn`, `Ps2fpa`, `Ps2fpl`, `Ps2fpn`, `Ps2fsa`, `Ps2fsd`, `Ps2fsg`, `Ps2fsn`, `Ps2mpa`, `Ps2mpg`, `Ps2mpl`, `Ps2mpn`, `Ps2msan`, `Ps2msd`, `Ps2msg`, `Ps2msi`, `Ps2msl`, `Ps2msn`, `Ps2npn`, `Ps2nsa`, `Ps2nsg`, `Ps2nsi`, `Ps2nsl`, `Ps2nsn`, `Ps3fpa`, `Ps3fpg`, `Ps3fpl`, `Ps3fpn`, `Ps3fsa`, `Ps3fsd`, `Ps3fsg`, `Ps3fsi`, `Ps3fsl`, `Ps3fsn`, `Ps3mpa`, `Ps3mpd`, `Ps3mpg`, `Ps3mpi`, `Ps3mpl`, `Ps3mpn`, `Ps3msan`, `Ps3msay`, `Ps3msd`, `Ps3msg`, `Ps3msi`, `Ps3msl`, `Ps3msn`, `Ps3npa`, `Ps3npg`, `Ps3npl`, `Ps3npn`, `Ps3nsa`, `Ps3nsg`, `Ps3nsi`, `Ps3nsl`, `Ps3nsn`, `Px--sa`, `Px--sd`, `Px--sg`, `Px--si`, `Px--sl`, `Px-fpa`, `Px-fpg`, `Px-fpi`, `Px-fpl`, `Px-fpn`, `Px-fsa`, `Px-fsd`, `Px-fsg`, `Px-fsi`, `Px-fsl`, `Px-mpa`, `Px-mpd`, `Px-mpg`, `Px-mpi`, `Px-mpl`, `Px-msan`, `Px-msay`, `Px-msd`, `Px-msg`, `Px-msi`, `Px-msl`, `Px-npa`, `Px-npg`, `Px-npi`, `Px-npl`, `Px-nsa`, `Px-nsg`, `Px-nsi`, `Px-nsl`, `Qo`, `Qq`, `Qr`, `Qz`, `Rgc`, `Rgp`, `Rgs`, `Rr`, `Sa`, `Sd`, `Sg`, `Si`, `Sl`, `Vaa1p`, `Vaa1s`, `Vaa2p`, `Vaa2s`, `Vaa3p`, `Vaa3s`, `Vae3s`, `Vam2p`, `Van`, `Vap-pf`, `Vap-pm`, `Vap-pn`, `Vap-sf`, `Vap-sm`, `Vap-sn`, `Var1p`, `Var1s`, `Var2p`, `Var2s`, `Var3p`, `Var3s`, `Vma3s`, `Vmm1p`, `Vmm2p`, `Vmm2s`, `Vmn`, `Vmp-pf`, `Vmp-pm`, `Vmp-pn`, `Vmp-sf`, `Vmp-sm`, `Vmp-sn`, `Vmr1p`, `Vmr1s`, `Vmr2p`, `Vmr2s`, `Vmr3p`, `Vmr3s`, `X`, `Xf`, `Y`, `Z` | | **`parser`** | `ROOT`, `acl`, `advcl`, 
`advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `expl`, `fixed`, `flat`, `goeswith`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `xcomp` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `POS_ACC` | 98.70 |
| `MORPH_ACC` | 95.55 |
| `TAG_ACC` | 95.52 |
| `DEP_UAS` | 91.29 |
| `DEP_LAS` | 86.17 |
| `SENTS_P` | 95.36 |
| `SENTS_R` | 96.16 |
| `SENTS_F` | 95.76 |
| `TRANSFORMER_LOSS` | 24668298.17 |
| `MORPHOLOGIZER_LOSS` | 362811.40 |
| `TAGGER_LOSS` | 349660.11 |
| `PARSER_LOSS` | 2088768.64 |
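The label scheme above is exposed through the usual spaCy token attributes: the morphologizer fills `token.morph`, the tagger fills `token.tag_`, and the parser fills `token.dep_`. The snippet below is a minimal sketch of loading the pipeline and reading those attributes; it assumes the packaged pipeline `hr_bertic_pipeline` is already installed in the environment (for example from a wheel published for this model), and the sample sentence is illustrative only.

```python
import spacy

# Assumption: the packaged pipeline has been installed locally under this name.
nlp = spacy.load("hr_bertic_pipeline")

doc = nlp("Zagreb je glavni grad Hrvatske.")
for token in doc:
    # POS/morphological features come from the morphologizer, the fine-grained
    # tag from the tagger, and the dependency relation from the parser.
    print(token.text, token.pos_, token.tag_, str(token.morph), token.dep_)
```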
{"language": ["hr"], "tags": ["spacy", "token-classification"]}
danielvasic/hr_bertic_pipeline
null
[ "spacy", "token-classification", "hr", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hr" ]
TAGS #spacy #token-classification #hr #model-index #region-us
### Label Scheme View label scheme (1392 labels for 3 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (1392 labels for 3 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #hr #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (1392 labels for 3 components)", "### Accuracy" ]
token-classification
spacy
| Feature | Description | | --- | --- | | **Name** | `hr_hroberta_pipeline` | | **Version** | `0.0.1` | | **spaCy** | `>=3.1.3,<3.2.0` | | **Default Pipeline** | `transformer`, `morphologizer`, `tagger`, `parser` | | **Components** | `transformer`, `morphologizer`, `tagger`, `parser` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | HR500k | | **License** | CC BY-SA 4.0 | | **Author** | [Daniel Vasić](https://github.com/danielvasic) | ### Label Scheme <details> <summary>View label scheme (1392 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`morphologizer`** | `Case=nominative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Case=locative\|POS=ADP`, `Case=locative\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Case=instrumental\|POS=ADP`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Degree=positive\|POS=ADV\|Type=general`, `Number=singular\|POS=VERB\|Person=third\|Type=main\|VForm=present`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Case=locative\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `POS=PUNCT`, `POS=PART\|Type=modal`, `Case=locative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `POS=SCONJ\|Type=subordinating`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=accusative\|Number=singular\|POS=PRON\|Type=reflexive`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `POS=CCONJ\|Type=coordinating`, `Case=genitive\|POS=ADP`, `Case=dative\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Number=plural\|POS=VERB\|Person=third\|Type=main\|VForm=present`, `Number=singular\|POS=AUX\|Person=third\|Type=auxiliary\|VForm=present`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=accusative\|POS=ADP`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Animate=no\|Case=accusative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `POS=VERB\|Type=main\|VForm=infinitive`, 
`Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `Case=nominative\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `POS=PART\|Type=negative`, `Case=accusative\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Degree=comparative\|POS=ADV\|Type=general`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=nominative\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Gender=masculine\|Number=singular\|POS=VERB\|Type=main\|VForm=participle`, `Case=locative\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Form=digit\|POS=ADJ\|Type=ordinal`, `Number=singular\|POS=AUX\|Person=first\|Type=auxiliary\|VForm=present`, `Number=plural\|POS=AUX\|Person=third\|Type=auxiliary\|VForm=present`, `Case=accusative\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `Gender=feminine\|Number=plural\|POS=VERB\|Type=main\|VForm=participle`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Gender=neuter\|Number=singular\|POS=VERB\|Type=main\|VForm=participle`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Degree=superlative\|POS=ADV\|Type=general`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=locative\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=nominative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Number=plural\|POS=VERB\|Person=first\|Type=main\|VForm=present`, 
`Case=nominative\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Gender=feminine\|Number=plural\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Gender=masculine\|Number=singular\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Gender=masculine\|Number=plural\|POS=VERB\|Type=main\|VForm=participle`, `Form=digit\|POS=NUM\|Type=cardinal`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `Gender=feminine\|Number=singular\|POS=VERB\|Type=main\|VForm=participle`, `Case=accusative\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=locative\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=nominative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Animate=no\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Gender=neuter\|Number=singular\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Case=locative\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=genitive\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=participle`, `Number=plural\|POS=AUX\|Person=third\|Type=auxiliary\|VForm=aorist`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, 
`Case=dative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Case=locative\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=dative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=locative\|Gender=neuter\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Number=singular\|POS=AUX\|Person=third\|Type=auxiliary\|VForm=aorist`, `POS=X`, `Case=genitive\|Form=letter\|POS=NUM\|Type=cardinal`, `Case=genitive\|Form=letter\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=cardinal`, `Form=letter\|POS=NUM\|Type=cardinal`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=nominative\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=DET\|Type=reflexive`, `POS=X\|Type=foreign`, `Number=plural\|POS=VERB\|Person=second\|Type=main\|VForm=present`, `POS=PART\|Type=interrogative`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `POS=ADV\|Type=participle`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Number=singular\|POS=VERB\|Person=first\|Type=main\|VForm=present`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, 
`Case=dative\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Animate=yes\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Animate=yes\|Case=accusative\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=nominative\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Number=plural\|POS=AUX\|Person=first\|Type=auxiliary\|VForm=present`, `POS=AUX\|Type=auxiliary\|VForm=infinitive`, `Case=locative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Gender=feminine\|Number=singular\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Animate=no\|Case=accusative\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=neuter\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=dative\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=nominative\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=genitive\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=instrumental\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=dative\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Type=reflexive`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=locative\|Gender=neuter\|Number=plural\|POS=DET\|Type=reflexive`, `Case=nominative\|Gender=masculine\|POS=PRON\|Person=third\|Type=indefinite`, 
`Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Number=plural\|POS=AUX\|Person=first\|Type=auxiliary\|VForm=aorist`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=dative\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Case=nominative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=dative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `POS=NOUN`, `Case=vocative\|Gender=masculine\|Number=singular\|POS=NOUN\|Type=common`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=accusative\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=locative\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=accusative\|Gender=neuter\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Gender=neuter\|POS=PRON\|Person=third\|Type=interrogative`, `Case=nominative\|Number=plural\|POS=PRON\|Person=second\|Type=personal`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Number=plural\|POS=AUX\|Person=second\|Type=auxiliary\|VForm=present`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=dative\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=dative\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `Number=singular\|POS=AUX\|Person=first\|Type=auxiliary\|VForm=aorist`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=accusative\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Type=interrogative`, 
`Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Type=interrogative`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Animate=no\|Case=accusative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=dative\|Gender=masculine\|POS=PRON\|Person=third\|Type=indefinite`, `Case=instrumental\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=dative\|POS=ADP`, `Case=instrumental\|Number=singular\|POS=PRON\|Type=reflexive`, `Case=genitive\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=locative\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=locative\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Gender=neuter\|Number=plural\|POS=VERB\|Type=main\|VForm=participle`, `Case=nominative\|Form=letter\|Gender=neuter\|POS=NUM\|Type=cardinal`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=dative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Case=accusative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=cardinal`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Form=letter\|POS=NUM\|Type=special`, `Case=accusative\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=genitive\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Case=genitive\|Form=letter\|Gender=feminine\|POS=NUM\|Type=cardinal`, `Animate=no\|Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Type=interrogative`, `Case=nominative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Case=instrumental\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, 
`Case=dative\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=dative\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=genitive\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=locative\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=instrumental\|Gender=neuter\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=instrumental\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=dative\|Gender=feminine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Gender=neuter\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Gender=neuter\|Number=plural\|POS=NOUN\|Type=common`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=participle`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Gender=masculine\|Number=plural\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Case=genitive\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=dative\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, 
`Case=genitive\|Form=letter\|Gender=neuter\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=nominative\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=instrumental\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=dative\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `POS=PROPN`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=instrumental\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=genitive\|Form=letter\|Gender=masculine\|POS=NUM\|Type=special`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Animate=no\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=locative\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=accusative\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=accusative\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=dative\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=locative\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Animate=no\|Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Type=reflexive`, `Gender=neuter\|Number=plural\|POS=AUX\|Type=auxiliary\|VForm=participle`, `Case=instrumental\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=locative\|Form=letter\|Gender=masculine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=possessive`, `Number=plural\|POS=VERB\|Person=second\|Type=main\|VForm=imperative`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=genitive\|Gender=neuter\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=locative\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Gender=feminine\|Number=plural\|POS=PROPN\|Type=proper`, 
`Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=instrumental\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=instrumental\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=dative\|Gender=masculine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=dative\|Gender=neuter\|Number=singular\|POS=PROPN\|Type=proper`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=dative\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Form=digit\|POS=SYM\|Type=special`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=participle`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=possessive`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=cardinal`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=nominative\|Form=letter\|Gender=neuter\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Form=digit\|POS=NUM\|Type=special`, `Case=genitive\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=locative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=dative\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Type=reflexive`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, 
`Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=genitive\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=genitive\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Number=singular\|POS=AUX\|Person=second\|Type=auxiliary\|VForm=aorist`, `Case=dative\|Gender=masculine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `POS=SYM`, `Case=instrumental\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=dative\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Animate=yes\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=genitive\|Form=letter\|Gender=masculine\|POS=NUM\|Type=cardinal`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Type=indefinite`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Case=vocative\|Gender=masculine\|Number=plural\|POS=NOUN\|Type=common`, `Case=instrumental\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=genitive\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Case=genitive\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Form=digit\|POS=NUM\|Type=multiple`, `Case=instrumental\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=locative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=feminine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, 
`Case=genitive\|Gender=neuter\|Number=plural\|POS=DET\|Type=reflexive`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=accusative\|Gender=neuter\|Number=plural\|POS=DET\|Type=reflexive`, `Animate=yes\|Case=accusative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Number=plural\|POS=VERB\|Person=first\|Type=main\|VForm=imperative`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=possessive`, `Animate=yes\|Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=genitive\|Definiteness=no\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=locative\|Number=singular\|POS=PRON\|Type=reflexive`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=nominative\|Form=letter\|Gender=masculine\|Number=plural\|POS=NUM\|Type=cardinal`, `Case=accusative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Type=interrogative`, `Case=accusative\|Gender=neuter\|POS=PRON\|Person=third\|Type=interrogative`, `Case=locative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Animate=no\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=locative\|Gender=feminine\|Number=plural\|POS=PROPN\|Type=proper`, `Animate=no\|Case=accusative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=locative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=genitive\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Gender=masculine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=dative\|Number=plural\|POS=PRON\|Person=second\|Type=personal`, `Case=accusative\|Form=letter\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=ordinal`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=PRON\|Type=indefinite`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=genitive\|Number=singular\|POS=PRON\|Type=reflexive`, `Case=genitive\|Gender=neuter\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=dative\|Number=singular\|POS=PRON\|Type=reflexive`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, 
`Case=locative\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=locative\|Form=letter\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=locative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Gender=neuter\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=accusative\|Gender=masculine\|POS=PRON\|Person=third\|Type=indefinite`, `Case=locative\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=instrumental\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Number=singular\|POS=VERB\|Person=third\|Type=main\|VForm=aorist`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=instrumental\|Form=letter\|Gender=feminine\|POS=NUM\|Type=special`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `POS=ADJ`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=instrumental\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=locative\|Number=plural\|POS=PRON\|Person=third\|Type=personal`, `Case=instrumental\|Form=letter\|POS=NUM\|Type=cardinal`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Type=indefinite`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=instrumental\|Form=letter\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=accusative\|Form=letter\|Gender=neuter\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=DET\|Type=demonstrative`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=PRON\|Person=third\|Type=personal`, `Case=accusative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=special`, `Case=dative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Number=singular\|POS=VERB\|Person=second\|Type=main\|VForm=imperative`, `Case=nominative\|Gender=neuter\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, 
`Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Form=roman\|POS=NUM\|Type=cardinal`, `Case=instrumental\|Gender=feminine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=genitive\|Gender=feminine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Gender=feminine\|Number=plural\|POS=DET\|Type=indefinite`, `Case=dative\|Form=letter\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=participle`, `Case=dative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Form=letter\|Gender=masculine\|Number=plural\|POS=NUM\|Type=special`, `Case=locative\|Form=letter\|Gender=feminine\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=instrumental\|Form=letter\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=accusative\|Number=plural\|POS=PRON\|Person=second\|Type=personal`, `Case=genitive\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=locative\|Form=letter\|POS=NUM\|Type=cardinal`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Type=reflexive`, `Case=instrumental\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=participle`, `Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=nominative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Type=reflexive`, `Case=genitive\|Gender=masculine\|POS=PRON\|Person=third\|Type=indefinite`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Form=roman\|POS=ADJ\|Type=ordinal`, `Case=dative\|Definiteness=no\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=genitive\|Gender=neuter\|Number=plural\|POS=PROPN\|Type=proper`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Type=reflexive`, `Case=locative\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=locative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=cardinal`, `Number=plural\|POS=AUX\|Person=second\|Type=auxiliary\|VForm=aorist`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Person=third\|Type=possessive`, `Case=genitive\|POS=SYM`, `Case=locative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=possessive`, 
`Case=nominative\|Gender=masculine\|POS=PRON\|Person=third\|Type=interrogative`, `Case=locative\|Definiteness=no\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `POS=PART`, `Case=locative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Number=plural\|POS=PRON\|Person=first\|Type=personal`, `Case=genitive\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=dative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=dative\|Gender=masculine\|POS=PRON\|Person=third\|Type=interrogative`, `Case=instrumental\|Definiteness=no\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `POS=INTJ`, `Case=locative\|Gender=neuter\|POS=PRON\|Person=third\|Type=interrogative`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `POS=PART\|Type=affirmative`, `Number=singular\|POS=VERB\|Person=second\|Type=main\|VForm=present`, `Case=dative\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, `Case=dative\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Type=demonstrative`, `Case=locative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Type=interrogative`, `Case=dative\|Definiteness=yes\|Degree=comparative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Gender=neuter\|Number=plural\|POS=DET\|Type=indefinite`, `Case=locative\|Gender=neuter\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=instrumental\|Gender=masculine\|POS=PRON\|Person=third\|Type=indefinite`, `Case=locative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Animate=yes\|Case=accusative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=dative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=dative\|Gender=neuter\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Gender=masculine\|Number=singular\|POS=DET\|Type=interrogative`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Type=reflexive`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=nominative\|Form=letter\|Gender=neuter\|POS=NUM\|Type=special`, `Case=locative\|Form=letter\|Gender=masculine\|Number=plural\|POS=NUM\|Type=cardinal`, `Case=accusative\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, 
`Case=locative\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Number=singular\|POS=AUX\|Person=second\|Type=auxiliary\|VForm=present`, `Case=vocative\|Gender=neuter\|Number=singular\|POS=NOUN\|Type=common`, `Case=genitive\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Animate=yes\|Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=vocative\|Gender=feminine\|Number=singular\|POS=NOUN\|Type=common`, `Case=locative\|Form=letter\|Gender=neuter\|Number=singular\|POS=NUM\|Type=cardinal`, `Case=vocative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=vocative\|Form=letter\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=DET\|Person=third\|Type=possessive`, `Case=locative\|Gender=feminine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=instrumental\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Case=vocative\|Gender=feminine\|Number=plural\|POS=NOUN\|Type=common`, `Case=nominative\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, `Case=genitive\|Number=plural\|POS=PRON\|Person=second\|Type=personal`, `Case=locative\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Number=plural\|POS=AUX\|Person=second\|Type=auxiliary\|VForm=imperative`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=PROPN\|Type=proper`, `Case=genitive\|Definiteness=no\|Degree=positive\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=possessive`, `Case=vocative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Animate=yes\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Number=singular\|POS=AUX\|Person=third\|Type=auxiliary\|VForm=imperfect`, `Case=accusative\|Gender=feminine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=genitive\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=nominative\|Form=letter\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=ordinal`, `Case=genitive\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Form=letter\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=ordinal`, `Case=locative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `POS=ADV`, `Case=locative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=special`, `Case=nominative\|Gender=masculine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, 
`Case=vocative\|Gender=masculine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=vocative\|Gender=masculine\|Number=singular\|POS=PROPN\|Type=proper`, `Case=accusative\|Gender=feminine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=dative\|Gender=feminine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=instrumental\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=possessive`, `Case=nominative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=special`, `Case=nominative\|Gender=neuter\|Number=plural\|POS=PROPN\|Type=proper`, `Animate=no\|Case=accusative\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=vocative\|Definiteness=yes\|Degree=positive\|Gender=feminine\|Number=singular\|POS=ADJ\|Type=general`, `Case=vocative\|Gender=feminine\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=dative\|Definiteness=yes\|Degree=positive\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=participle`, `Case=genitive\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, `Case=instrumental\|Gender=masculine\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=nominative\|Form=letter\|Gender=masculine\|POS=NUM\|Type=cardinal`, `Case=accusative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Person=second\|Type=possessive`, `Case=genitive\|Gender=masculine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=instrumental\|Definiteness=yes\|Degree=superlative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Animate=yes\|Case=accusative\|Definiteness=no\|Degree=superlative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=locative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=plural\|POS=DET\|Type=interrogative`, `Case=dative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=singular\|POS=ADJ\|Type=general`, `Case=instrumental\|Number=singular\|POS=PRON\|Person=first\|Type=personal`, `Case=instrumental\|Gender=neuter\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=vocative\|Number=singular\|POS=PRON\|Person=second\|Type=personal`, `Animate=yes\|Case=accusative\|Definiteness=no\|Degree=positive\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=participle`, `Case=dative\|Form=letter\|Gender=feminine\|POS=NUM\|Type=cardinal`, `Case=dative\|Definiteness=yes\|Degree=superlative\|Gender=feminine\|Number=plural\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=masculine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=instrumental\|Gender=neuter\|Number=plural\|POS=DET\|Type=reflexive`, `Case=dative\|Gender=neuter\|POS=PRON\|Person=third\|Type=indefinite`, `Case=vocative\|Definiteness=yes\|Degree=superlative\|Gender=masculine\|Number=plural\|POS=ADJ\|Type=general`, `Case=vocative\|Gender=masculine\|Number=plural\|POS=DET\|Person=first\|Type=possessive`, `Animate=yes\|Case=accusative\|Definiteness=no\|Degree=comparative\|Gender=masculine\|Number=singular\|POS=ADJ\|Type=general`, `Case=accusative\|Gender=neuter\|Number=singular\|POS=DET\|Person=first\|Type=possessive`, `Case=accusative\|Definiteness=yes\|Degree=comparative\|Gender=neuter\|Number=plural\|POS=ADJ\|Type=general`, `Case=nominative\|Gender=feminine\|Number=singular\|POS=DET\|Type=interrogative`, 
`Case=locative\|Gender=masculine\|Number=plural\|POS=DET\|Person=second\|Type=possessive`, `Case=instrumental\|Gender=feminine\|Number=singular\|POS=DET\|Type=interrogative`, `Case=genitive\|Gender=neuter\|Number=singular\|POS=DET\|Person=second\|Type=possessive` | | **`tagger`** | `Agcfpay`, `Agcfpdy`, `Agcfpgy`, `Agcfpiy`, `Agcfply`, `Agcfpny`, `Agcfsay`, `Agcfsdy`, `Agcfsgy`, `Agcfsiy`, `Agcfsly`, `Agcfsny`, `Agcmpay`, `Agcmpgy`, `Agcmpiy`, `Agcmply`, `Agcmpny`, `Agcmsany`, `Agcmsay`, `Agcmsayn`, `Agcmsdy`, `Agcmsgy`, `Agcmsiy`, `Agcmsly`, `Agcmsny`, `Agcnpay`, `Agcnpdy`, `Agcnpgy`, `Agcnpny`, `Agcnsay`, `Agcnsdy`, `Agcnsgy`, `Agcnsiy`, `Agcnsly`, `Agcnsny`, `Agpfpay`, `Agpfpdy`, `Agpfpgy`, `Agpfpiy`, `Agpfply`, `Agpfpny`, `Agpfsay`, `Agpfsdy`, `Agpfsgy`, `Agpfsin`, `Agpfsiy`, `Agpfsly`, `Agpfsny`, `Agpfsvy`, `Agpmpay`, `Agpmpdy`, `Agpmpgy`, `Agpmpiy`, `Agpmply`, `Agpmpny`, `Agpmpvy`, `Agpmsan`, `Agpmsann`, `Agpmsany`, `Agpmsay`, `Agpmsayn`, `Agpmsayy`, `Agpmsdy`, `Agpmsgn`, `Agpmsgy`, `Agpmsiy`, `Agpmsln`, `Agpmsly`, `Agpmsnn`, `Agpmsny`, `Agpmsvy`, `Agpnpay`, `Agpnpdy`, `Agpnpgy`, `Agpnpiy`, `Agpnply`, `Agpnpny`, `Agpnsay`, `Agpnsdy`, `Agpnsgn`, `Agpnsgy`, `Agpnsiy`, `Agpnsln`, `Agpnsly`, `Agpnsny`, `Agsfpay`, `Agsfpdy`, `Agsfpgy`, `Agsfpiy`, `Agsfply`, `Agsfpny`, `Agsfsay`, `Agsfsgy`, `Agsfsiy`, `Agsfsly`, `Agsfsny`, `Agsmpay`, `Agsmpdy`, `Agsmpgy`, `Agsmpiy`, `Agsmply`, `Agsmpny`, `Agsmpvy`, `Agsmsany`, `Agsmsay`, `Agsmsayn`, `Agsmsayy`, `Agsmsdy`, `Agsmsgy`, `Agsmsiy`, `Agsmsly`, `Agsmsny`, `Agsnpay`, `Agsnpgy`, `Agsnply`, `Agsnpny`, `Agsnsay`, `Agsnsdy`, `Agsnsiy`, `Agsnsly`, `Agsnsny`, `Appfpay`, `Appfpdy`, `Appfpgy`, `Appfpiy`, `Appfply`, `Appfpny`, `Appfsay`, `Appfsgy`, `Appfsiy`, `Appfsly`, `Appfsny`, `Appmpay`, `Appmpdy`, `Appmpgy`, `Appmpiy`, `Appmply`, `Appmpny`, `Appmsann`, `Appmsany`, `Appmsayn`, `Appmsayy`, `Appmsdy`, `Appmsgn`, `Appmsgy`, `Appmsiy`, `Appmsly`, `Appmsnn`, `Appmsny`, `Appnpay`, `Appnpdy`, `Appnpgy`, `Appnpiy`, `Appnply`, `Appnpny`, `Appnsay`, `Appnsgy`, `Appnsly`, `Appnsny`, `Aspfpay`, `Aspfpgy`, `Aspfpiy`, `Aspfply`, `Aspfpny`, `Aspfsay`, `Aspfsdy`, `Aspfsgy`, `Aspfsiy`, `Aspfsly`, `Aspfsny`, `Aspmpay`, `Aspmpgy`, `Aspmply`, `Aspmpny`, `Aspmsayn`, `Aspmsayy`, `Aspmsdy`, `Aspmsgn`, `Aspmsgy`, `Aspmsiy`, `Aspmsln`, `Aspmsly`, `Aspmsnn`, `Aspnpay`, `Aspnpgy`, `Aspnpny`, `Aspnsay`, `Aspnsdn`, `Aspnsgn`, `Aspnsgy`, `Aspnsly`, `Aspnsny`, `Cc`, `Cs`, `I`, `Mdc`, `Mdm`, `Mdo`, `Mds`, `Mlc`, `Mlc--g`, `Mlc--i`, `Mlc--l`, `Mlcf-a`, `Mlcf-d`, `Mlcf-g`, `Mlcf-n`, `Mlcfsa`, `Mlcfsd`, `Mlcfsg`, `Mlcfsi`, `Mlcfsl`, `Mlcfsn`, `Mlcm-a`, `Mlcm-g`, `Mlcm-l`, `Mlcm-n`, `Mlcmpl`, `Mlcmpn`, `Mlcmsan`, `Mlcmsay`, `Mlcmsg`, `Mlcmsi`, `Mlcmsl`, `Mlcmsn`, `Mlcn-n`, `Mlcnsa`, `Mlcnsg`, `Mlcnsl`, `Mlcnsn`, `Mlofpa`, `Mlofpd`, `Mlofpg`, `Mlofpi`, `Mlofpl`, `Mlofpn`, `Mlofsa`, `Mlofsd`, `Mlofsg`, `Mlofsi`, `Mlofsl`, `Mlofsn`, `Mlompa`, `Mlompd`, `Mlompg`, `Mlompi`, `Mlompl`, `Mlompn`, `Mlomsan`, `Mlomsay`, `Mlomsd`, `Mlomsg`, `Mlomsi`, `Mlomsl`, `Mlomsn`, `Mlomsv`, `Mlonpa`, `Mlonpg`, `Mlonpl`, `Mlonpn`, `Mlonsa`, `Mlonsd`, `Mlonsg`, `Mlonsi`, `Mlonsl`, `Mlonsn`, `Mls`, `Mlsf-a`, `Mlsf-d`, `Mlsf-g`, `Mlsf-i`, `Mlsf-l`, `Mlsf-n`, `Mlsm-a`, `Mlsm-g`, `Mlsm-l`, `Mlsm-n`, `Mlsmpn`, `Mlsn-n`, `Mrc`, `Mro`, `Ncfpa`, `Ncfpd`, `Ncfpg`, `Ncfpi`, `Ncfpl`, `Ncfpn`, `Ncfpv`, `Ncfsa`, `Ncfsd`, `Ncfsg`, `Ncfsi`, `Ncfsl`, `Ncfsn`, `Ncfsv`, `Ncmpa`, `Ncmpd`, `Ncmpg`, `Ncmpi`, `Ncmpl`, `Ncmpn`, `Ncmpv`, `Ncmsan`, `Ncmsay`, `Ncmsd`, `Ncmsg`, `Ncmsi`, `Ncmsl`, `Ncmsn`, `Ncmsv`, `Ncnpa`, `Ncnpd`, 
`Ncnpg`, `Ncnpi`, `Ncnpl`, `Ncnpn`, `Ncnsa`, `Ncnsd`, `Ncnsg`, `Ncnsi`, `Ncnsl`, `Ncnsn`, `Ncnsv`, `Npfpa`, `Npfpg`, `Npfpl`, `Npfpn`, `Npfsa`, `Npfsd`, `Npfsg`, `Npfsi`, `Npfsl`, `Npfsn`, `Npmpa`, `Npmpd`, `Npmpg`, `Npmpi`, `Npmpl`, `Npmpn`, `Npmsan`, `Npmsay`, `Npmsd`, `Npmsg`, `Npmsi`, `Npmsl`, `Npmsn`, `Npmsv`, `Npnpg`, `Npnpn`, `Npnsa`, `Npnsd`, `Npnsg`, `Npnsi`, `Npnsl`, `Npnsn`, `Pd-fpa`, `Pd-fpd`, `Pd-fpg`, `Pd-fpi`, `Pd-fpl`, `Pd-fpn`, `Pd-fsa`, `Pd-fsd`, `Pd-fsg`, `Pd-fsi`, `Pd-fsl`, `Pd-fsn`, `Pd-mpa`, `Pd-mpd`, `Pd-mpg`, `Pd-mpi`, `Pd-mpl`, `Pd-mpn`, `Pd-msan`, `Pd-msay`, `Pd-msd`, `Pd-msg`, `Pd-msi`, `Pd-msl`, `Pd-msn`, `Pd-npa`, `Pd-npd`, `Pd-npg`, `Pd-npi`, `Pd-npn`, `Pd-nsa`, `Pd-nsd`, `Pd-nsg`, `Pd-nsi`, `Pd-nsl`, `Pd-nsn`, `Pi-fpa`, `Pi-fpd`, `Pi-fpg`, `Pi-fpi`, `Pi-fpl`, `Pi-fpn`, `Pi-fsa`, `Pi-fsd`, `Pi-fsg`, `Pi-fsi`, `Pi-fsl`, `Pi-fsn`, `Pi-mpa`, `Pi-mpd`, `Pi-mpg`, `Pi-mpi`, `Pi-mpl`, `Pi-mpn`, `Pi-msan`, `Pi-msay`, `Pi-msd`, `Pi-msg`, `Pi-msi`, `Pi-msl`, `Pi-msn`, `Pi-npa`, `Pi-npd`, `Pi-npg`, `Pi-npi`, `Pi-npl`, `Pi-npn`, `Pi-nsa`, `Pi-nsd`, `Pi-nsg`, `Pi-nsi`, `Pi-nsl`, `Pi-nsn`, `Pi3m-a`, `Pi3m-d`, `Pi3m-g`, `Pi3m-i`, `Pi3m-n`, `Pi3n-a`, `Pi3n-d`, `Pi3n-g`, `Pi3n-i`, `Pi3n-l`, `Pi3n-n`, `Pp1-pa`, `Pp1-pd`, `Pp1-pg`, `Pp1-pi`, `Pp1-pl`, `Pp1-pn`, `Pp1-sa`, `Pp1-sd`, `Pp1-sg`, `Pp1-si`, `Pp1-sl`, `Pp1-sn`, `Pp2-pa`, `Pp2-pd`, `Pp2-pg`, `Pp2-pn`, `Pp2-sa`, `Pp2-sd`, `Pp2-sg`, `Pp2-sl`, `Pp2-sn`, `Pp2-sv`, `Pp3-pa`, `Pp3-pd`, `Pp3-pg`, `Pp3-pi`, `Pp3-pl`, `Pp3fpn`, `Pp3fsa`, `Pp3fsd`, `Pp3fsg`, `Pp3fsi`, `Pp3fsl`, `Pp3fsn`, `Pp3mpn`, `Pp3msa`, `Pp3msd`, `Pp3msg`, `Pp3msi`, `Pp3msl`, `Pp3msn`, `Pp3npn`, `Pp3nsa`, `Pp3nsi`, `Pp3nsn`, `Pq-fpa`, `Pq-fpn`, `Pq-fsa`, `Pq-fsi`, `Pq-fsl`, `Pq-fsn`, `Pq-mpn`, `Pq-msn`, `Pq-nsn`, `Pq3m-d`, `Pq3m-n`, `Pq3n-a`, `Pq3n-l`, `Pq3n-n`, `Ps1fpa`, `Ps1fpd`, `Ps1fpg`, `Ps1fpl`, `Ps1fpn`, `Ps1fsa`, `Ps1fsd`, `Ps1fsg`, `Ps1fsi`, `Ps1fsl`, `Ps1fsn`, `Ps1fsv`, `Ps1mpa`, `Ps1mpd`, `Ps1mpg`, `Ps1mpi`, `Ps1mpl`, `Ps1mpn`, `Ps1mpv`, `Ps1msan`, `Ps1msay`, `Ps1msd`, `Ps1msg`, `Ps1msi`, `Ps1msl`, `Ps1msn`, `Ps1msv`, `Ps1npd`, `Ps1npn`, `Ps1nsa`, `Ps1nsg`, `Ps1nsi`, `Ps1nsl`, `Ps1nsn`, `Ps2fpa`, `Ps2fpl`, `Ps2fpn`, `Ps2fsa`, `Ps2fsd`, `Ps2fsg`, `Ps2fsn`, `Ps2mpa`, `Ps2mpg`, `Ps2mpl`, `Ps2mpn`, `Ps2msan`, `Ps2msd`, `Ps2msg`, `Ps2msi`, `Ps2msl`, `Ps2msn`, `Ps2npn`, `Ps2nsa`, `Ps2nsg`, `Ps2nsi`, `Ps2nsl`, `Ps2nsn`, `Ps3fpa`, `Ps3fpg`, `Ps3fpl`, `Ps3fpn`, `Ps3fsa`, `Ps3fsd`, `Ps3fsg`, `Ps3fsi`, `Ps3fsl`, `Ps3fsn`, `Ps3mpa`, `Ps3mpd`, `Ps3mpg`, `Ps3mpi`, `Ps3mpl`, `Ps3mpn`, `Ps3msan`, `Ps3msay`, `Ps3msd`, `Ps3msg`, `Ps3msi`, `Ps3msl`, `Ps3msn`, `Ps3npa`, `Ps3npg`, `Ps3npl`, `Ps3npn`, `Ps3nsa`, `Ps3nsg`, `Ps3nsi`, `Ps3nsl`, `Ps3nsn`, `Px--sa`, `Px--sd`, `Px--sg`, `Px--si`, `Px--sl`, `Px-fpa`, `Px-fpg`, `Px-fpi`, `Px-fpl`, `Px-fpn`, `Px-fsa`, `Px-fsd`, `Px-fsg`, `Px-fsi`, `Px-fsl`, `Px-mpa`, `Px-mpd`, `Px-mpg`, `Px-mpi`, `Px-mpl`, `Px-msan`, `Px-msay`, `Px-msd`, `Px-msg`, `Px-msi`, `Px-msl`, `Px-npa`, `Px-npg`, `Px-npi`, `Px-npl`, `Px-nsa`, `Px-nsg`, `Px-nsi`, `Px-nsl`, `Qo`, `Qq`, `Qr`, `Qz`, `Rgc`, `Rgp`, `Rgs`, `Rr`, `Sa`, `Sd`, `Sg`, `Si`, `Sl`, `Vaa1p`, `Vaa1s`, `Vaa2p`, `Vaa2s`, `Vaa3p`, `Vaa3s`, `Vae3s`, `Vam2p`, `Van`, `Vap-pf`, `Vap-pm`, `Vap-pn`, `Vap-sf`, `Vap-sm`, `Vap-sn`, `Var1p`, `Var1s`, `Var2p`, `Var2s`, `Var3p`, `Var3s`, `Vma3s`, `Vmm1p`, `Vmm2p`, `Vmm2s`, `Vmn`, `Vmp-pf`, `Vmp-pm`, `Vmp-pn`, `Vmp-sf`, `Vmp-sm`, `Vmp-sn`, `Vmr1p`, `Vmr1s`, `Vmr2p`, `Vmr2s`, `Vmr3p`, `Vmr3s`, `X`, `Xf`, `Y`, `Z` | | **`parser`** | `ROOT`, `acl`, `advcl`, 
`advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `expl`, `fixed`, `flat`, `goeswith`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `xcomp` | </details> ### Accuracy | Type | Score | | --- | --- | | `POS_ACC` | 97.94 | | `MORPH_ACC` | 93.45 | | `TAG_ACC` | 93.42 | | `DEP_UAS` | 88.33 | | `DEP_LAS` | 82.92 | | `SENTS_P` | 96.67 | | `SENTS_R` | 96.83 | | `SENTS_F` | 96.75 | | `TRANSFORMER_LOSS` | 3301725.19 | | `MORPHOLOGIZER_LOSS` | 410128.51 | | `TAGGER_LOSS` | 393243.89 | | `PARSER_LOSS` | 3074279.42 |
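The card documents the label scheme and per-component accuracy but gives no usage snippet. Below is a minimal sketch of loading the pipeline and reading its predictions; it assumes the packaged pipeline from this repository has been installed (e.g. via pip) under the package name `hr_hroberta_pipeline`, and the example sentence is an illustration, not taken from the card.

```python
import spacy

# Assumes the packaged pipeline from this repository is installed under the
# name "hr_hroberta_pipeline"; the package name and sentence are assumptions.
nlp = spacy.load("hr_hroberta_pipeline")

doc = nlp("Zagreb je glavni grad Hrvatske.")
for token in doc:
    # pos_/morph come from the morphologizer, tag_ from the tagger, dep_ from the parser
    print(token.text, token.pos_, token.morph, token.tag_, token.dep_)
```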
{"language": ["hr"], "license": "cc", "library_name": "spacy", "tags": ["spacy", "token-classification"], "datasets": ["classla/hr500k"], "metrics": ["f1", "accuracy"], "pipeline_tag": "token-classification"}
danielvasic/hr_hroberta_pipeline
null
[ "spacy", "token-classification", "hr", "dataset:classla/hr500k", "license:cc", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "hr" ]
TAGS #spacy #token-classification #hr #dataset-classla/hr500k #license-cc #model-index #region-us
### Label Scheme View label scheme (1392 labels for 3 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (1392 labels for 3 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #hr #dataset-classla/hr500k #license-cc #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (1392 labels for 3 components)", "### Accuracy" ]
text-generation
transformers
# Michael Scott DialoGPT Model
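The card consists only of a title. As a rough guide, the sketch below follows the usual DialoGPT chat pattern with this checkpoint; the generation settings and turn count are illustrative assumptions, not documented behaviour of this particular model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "danildany/DialoGPT-small-MichaelScott"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

chat_history_ids = None
for _ in range(3):  # three illustrative turns
    user_input = input(">> User: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    # Append the new user turn to the running conversation history.
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated bot reply.
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```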
{"tags": ["conversational"]}
danildany/DialoGPT-small-MichaelScott
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Michael Scott DialoGPT Model
[ "# Michael Scott DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Michael Scott DialoGPT Model" ]
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xxlarge-v2-finetuned-csqa-ih This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5694 - Accuracy: 0.8026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8032 | 1.0 | 532 | 0.5217 | 0.8043 | | 0.3182 | 2.0 | 1064 | 0.6313 | 0.7985 | | 0.0668 | 3.0 | 1596 | 1.2971 | 0.7969 | | 0.0131 | 4.0 | 2128 | 1.4671 | 0.8026 | | 0.0046 | 5.0 | 2660 | 1.5694 | 0.8026 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0 - Datasets 1.10.2 - Tokenizers 0.10.3
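The hyperparameters listed in the card map directly onto Hugging Face `TrainingArguments`. The sketch below reconstructs that configuration as an assumption — the original training script is not part of the card, and the output directory name is illustrative.

```python
from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters; an assumption about how
# training was configured, not the original script.
training_args = TrainingArguments(
    output_dir="albert-xxlarge-v2-finetuned-csqa-ih",  # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```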
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model_index": {"name": "albert-xxlarge-v2-finetuned-csqa-ih"}}
danlou/albert-xxlarge-v2-finetuned-csqa-ih
null
[ "transformers", "pytorch", "albert", "multiple-choice", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #albert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
albert-xxlarge-v2-finetuned-csqa-ih =================================== This model is a fine-tuned version of albert-xxlarge-v2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.5694 * Accuracy: 0.8026 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.9.0 * Datasets 1.10.2 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #albert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xxlarge-v2-finetuned-csqa This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the commonsense_qa dataset. It achieves the following results on the evaluation set: - Loss: 1.6177 - Accuracy: 0.7871 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7464 | 1.0 | 609 | 0.5319 | 0.7985 | | 0.3116 | 2.0 | 1218 | 0.6422 | 0.7936 | | 0.0769 | 3.0 | 1827 | 1.2674 | 0.7952 | | 0.0163 | 4.0 | 2436 | 1.4839 | 0.7903 | | 0.0122 | 5.0 | 3045 | 1.6177 | 0.7871 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0 - Datasets 1.10.2 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["commonsense_qa"], "metrics": ["accuracy"], "model_index": [{"name": "albert-xxlarge-v2-finetuned-csqa", "results": [{"dataset": {"name": "commonsense_qa", "type": "commonsense_qa", "args": "default"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.7870597839355469}}]}]}
danlou/albert-xxlarge-v2-finetuned-csqa
null
[ "transformers", "pytorch", "albert", "multiple-choice", "generated_from_trainer", "dataset:commonsense_qa", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #albert #multiple-choice #generated_from_trainer #dataset-commonsense_qa #license-apache-2.0 #endpoints_compatible #region-us
albert-xxlarge-v2-finetuned-csqa ================================ This model is a fine-tuned version of albert-xxlarge-v2 on the commonsense\_qa dataset. It achieves the following results on the evaluation set: * Loss: 1.6177 * Accuracy: 0.7871 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.9.0 * Datasets 1.10.2 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #albert #multiple-choice #generated_from_trainer #dataset-commonsense_qa #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aristo-roberta-finetuned-csqa This model is a fine-tuned version of [LIAMF-USP/aristo-roberta](https://huggingface.co/LIAMF-USP/aristo-roberta) on the commonsense_qa dataset. It achieves the following results on the evaluation set: - Loss: 1.2187 - Accuracy: 0.7305 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.131 | 1.0 | 609 | 0.7109 | 0.7232 | | 0.6957 | 2.0 | 1218 | 0.6912 | 0.7346 | | 0.459 | 3.0 | 1827 | 0.8364 | 0.7305 | | 0.3063 | 4.0 | 2436 | 1.0595 | 0.7322 | | 0.2283 | 5.0 | 3045 | 1.2187 | 0.7305 | ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0 - Datasets 1.10.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["commonsense_qa"], "metrics": ["accuracy"], "model_index": [{"name": "aristo-roberta-finetuned-csqa", "results": [{"dataset": {"name": "commonsense_qa", "type": "commonsense_qa", "args": "default"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.7305487394332886}}]}]}
danlou/aristo-roberta-finetuned-csqa
null
[ "transformers", "pytorch", "roberta", "multiple-choice", "generated_from_trainer", "dataset:commonsense_qa", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #roberta #multiple-choice #generated_from_trainer #dataset-commonsense_qa #license-mit #endpoints_compatible #region-us
aristo-roberta-finetuned-csqa ============================= This model is a fine-tuned version of LIAMF-USP/aristo-roberta on the commonsense\_qa dataset. It achieves the following results on the evaluation set: * Loss: 1.2187 * Accuracy: 0.7305 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.9.0 * Pytorch 1.9.0 * Datasets 1.10.2 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #roberta #multiple-choice #generated_from_trainer #dataset-commonsense_qa #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
text-classification
transformers
Testing
{}
danlou/distilbert-base-uncased-finetuned-rte
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
Testing
[]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-finetuned-csqa This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the commonsense_qa dataset. It achieves the following results on the evaluation set: - Loss: 0.9146 - Accuracy: 0.7330 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3903 | 1.0 | 609 | 0.8845 | 0.6642 | | 0.8939 | 2.0 | 1218 | 0.7054 | 0.7281 | | 0.6163 | 3.0 | 1827 | 0.7452 | 0.7314 | | 0.4245 | 4.0 | 2436 | 0.8369 | 0.7355 | | 0.3258 | 5.0 | 3045 | 0.9146 | 0.7330 | ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0 - Datasets 1.10.2 - Tokenizers 0.10.3
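The intended-uses section of the card is empty. The sketch below shows one plausible way to query the checkpoint on a CommonsenseQA-style question with a standard multiple-choice head; the example question and the question/answer pairing scheme are assumptions, since the card does not document the preprocessing used during fine-tuning.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_name = "danlou/roberta-large-finetuned-csqa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

# Illustrative CommonsenseQA-style question; not taken from the dataset.
question = "Where would you put a plate after washing it?"
choices = ["dishwasher", "cupboard", "restaurant", "floor", "table"]

# Encode the question against every candidate answer as a batch of pairs,
# then add a batch dimension: (1, num_choices, seq_len).
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

print(choices[logits.argmax(dim=-1).item()])
```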
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["commonsense_qa"], "metrics": ["accuracy"], "model_index": [{"name": "roberta-large-finetuned-csqa", "results": [{"dataset": {"name": "commonsense_qa", "type": "commonsense_qa", "args": "default"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.7330057621002197}}]}]}
danlou/roberta-large-finetuned-csqa
null
[ "transformers", "pytorch", "roberta", "multiple-choice", "generated_from_trainer", "dataset:commonsense_qa", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #roberta #multiple-choice #generated_from_trainer #dataset-commonsense_qa #license-mit #endpoints_compatible #region-us
roberta-large-finetuned-csqa ============================ This model is a fine-tuned version of roberta-large on the commonsense\_qa dataset. It achieves the following results on the evaluation set: * Loss: 0.9146 * Accuracy: 0.7330 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.9.0 * Pytorch 1.9.0 * Datasets 1.10.2 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #roberta #multiple-choice #generated_from_trainer #dataset-commonsense_qa #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
text-generation
transformers
#datnguyen
{"tags": ["conversational"]}
danny481/DialoGPT-small-datnguyenchatbot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#datnguyen
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
#Harry Potter DialoGPT
{"tags": ["conversational"]}
danny481/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#Harry Potter DialoGPT
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
#ChatBot updated by datng
{"tags": ["conversational"]}
danny481/Final_ChatBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#ChatBot updated by datng
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]