| column | dtype | values / lengths |
| --- | --- | --- |
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | listlengths | 0 to 201 |
| languages | listlengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | listlengths | 0 to 722 |
| processed_texts | listlengths | 1 to 723 |
token-classification
transformers
This model was created by fine-tuning the xlm-roberta-base model on the CoNLL 2003 dataset. On top of that trained model, we fine-tuned it again on Sinhala NER data that was also formatted in the CoNLL format.
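A minimal usage sketch (not part of the original card), assuming the standard transformers token-classification pipeline; the `aggregation_strategy` value and the example sentence are illustrative only:

```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned checkpoint with the generic NER pipeline.
# "aggregation_strategy" is an illustrative choice, not prescribed by the card.
ner = pipeline(
    "token-classification",
    model="asanka25/xlm-roberta-base-finetuned-conll03-english-finetuned-sinhala",
    aggregation_strategy="simple",
)

# English works because of the CoNLL 2003 stage; Sinhala input is handled the same way.
print(ner("John studies at the University of Colombo in Sri Lanka."))
```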
{}
asanka25/xlm-roberta-base-finetuned-conll03-english-finetuned-sinhala
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #autotrain_compatible #endpoints_compatible #region-us
This model was created by fine-tuning the xlm-roberta-base model on the CoNLL 2003 dataset. On top of that trained model, we fine-tuned it again on Sinhala NER data that was also formatted in the CoNLL format.
[]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
sentence-similarity
sentence-transformers
# recobo/agri-sentence-transformer This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for tasks like clustering or semantic search. This model was built using [recobo/agriculture-bert-uncased](https://huggingface.co/recobo/agriculture-bert-uncased), which is a BERT model trained on 6.5 million passages from the agricultural domain. Hence, this model is expected to perform well on sentence similarity tasks specifically for agricultural text data. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["A man is eating food.", "A man is eating a piece of bread"] model = SentenceTransformer('recobo/agri-sentence-transformer') embeddings = model.encode(sentences) print(embeddings) ```
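As a follow-up sketch (not taken from the card), the resulting embeddings can be scored with cosine similarity via `sentence_transformers.util`; the sentence pair is an invented example:

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative follow-up: compare two agricultural sentences by cosine similarity.
model = SentenceTransformer("recobo/agri-sentence-transformer")
sentences = [
    "Nitrogen fertilizer improves maize yield.",
    "Applying urea increases corn production.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))  # values closer to 1.0 mean more similar sentences
```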
{"language": "english", "tags": ["sentence-transformers", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
asanwari/agriculture-sentence-transformer
null
[ "sentence-transformers", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "english" ]
TAGS #sentence-transformers #sentence-similarity #transformers #endpoints_compatible #region-us
# recobo/agri-sentence-transformer This is a sentence-transformers model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model was built using recobo/agriculture-bert-uncased, which is a BERT model trained on 6.5 million passages from the agricultural domain. Hence, this model is expected to perform well on sentence similarity tasks specifically for agricultural text data. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: '''python from sentence_transformers import SentenceTransformer sentences = ["A man is eating food.", "A man is eating a piece of bread"] model = SentenceTransformer('recobo/agri-sentence-transformer') embeddings = URL(sentences) print(embeddings)
[ "# recobo/agri-sentence-transformer\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.\nThis model was built using recobo/agriculture-bert-uncased, which is a BERT model trained on 6.5 million passages from the agricultural domain. Hence, this model is expected to perform well on sentence similarity tasks specifically for agricultural text data.", "## Usage (Sentence-Transformers)\nUsing this model becomes easy when you have sentence-transformers installed:\n\nThen you can use the model like this:\n'''python\nfrom sentence_transformers import SentenceTransformer\nsentences = [\"A man is eating food.\", \"A man is eating a piece of bread\"]\n\nmodel = SentenceTransformer('recobo/agri-sentence-transformer')\nembeddings = URL(sentences)\nprint(embeddings)" ]
[ "TAGS\n#sentence-transformers #sentence-similarity #transformers #endpoints_compatible #region-us \n", "# recobo/agri-sentence-transformer\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.\nThis model was built using recobo/agriculture-bert-uncased, which is a BERT model trained on 6.5 million passages from the agricultural domain. Hence, this model is expected to perform well on sentence similarity tasks specifically for agricultural text data.", "## Usage (Sentence-Transformers)\nUsing this model becomes easy when you have sentence-transformers installed:\n\nThen you can use the model like this:\n'''python\nfrom sentence_transformers import SentenceTransformer\nsentences = [\"A man is eating food.\", \"A man is eating a piece of bread\"]\n\nmodel = SentenceTransformer('recobo/agri-sentence-transformer')\nembeddings = URL(sentences)\nprint(embeddings)" ]
feature-extraction
transformers
# SEW-D-base [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
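The card stops at the fine-tuning pointer, so here is a minimal feature-extraction sketch (not part of the original card), assuming the checkpoint ships a preprocessor config usable via `AutoFeatureExtractor` and that the transformers `SEWDModel` class is available; the silent waveform merely stands in for real 16 kHz audio:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, SEWDModel

# Hedged sketch: extract hidden states from raw 16 kHz audio with the pretrained model.
feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-d-base-100k")
model = SEWDModel.from_pretrained("asapp/sew-d-base-100k")

waveform = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence at 16 kHz
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, time_frames, hidden_size)
```

For downstream CTC-based ASR, `SEWDForCTC` would be used instead, as the card notes.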
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-d-base-100k
null
[ "transformers", "pytorch", "sew-d", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-D-base SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'.
[ "# SEW-D-base\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
[ "TAGS\n#transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-D-base\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
feature-extraction
transformers
# SEW-D-base+ [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-d-base-plus-100k
null
[ "transformers", "pytorch", "sew-d", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-D-base+ SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'.
[ "# SEW-D-base+\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
[ "TAGS\n#transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-D-base+\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
automatic-speech-recognition
transformers
# SEW-D-base+ [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, SEWDForCTC from datasets import load_dataset import soundfile as sf import torch # load the model and preprocessor processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h") model = SEWDForCTC.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h") # load the dummy dataset with speech samples ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # preprocess input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **asapp/sew-d-base-plus-400k-ft-ls100h** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import SEWDForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = SEWDForCTC.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h").to("cuda") processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h") def map_to_pred(batch): input_values = processor(batch["audio"][0]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | | --- | --- | | 4.34 | 9.45 |
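Since the card stresses 16 kHz input, audio recorded at another rate has to be resampled before it reaches the processor; a small sketch using torchaudio (an assumption on my part, not something the card prescribes, and `speech.wav` is a hypothetical file):

```python
import torchaudio

# Hedged sketch: bring arbitrary-rate audio to the 16 kHz the model expects.
waveform, sample_rate = torchaudio.load("speech.wav")  # hypothetical input file
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16000)
speech = waveform.mean(dim=0).numpy()  # downmix to mono; 1-D array for the processor
```

The resulting `speech` array can then be passed to the `Wav2Vec2Processor` call shown above in place of the dummy dataset sample.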
{"language": "en", "license": "apache-2.0", "tags": ["audio", "speech", "automatic-speech-recognition", "hf-asr-leaderboard"], "datasets": ["librispeech_asr"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "sew-d-base-plus-400k-ft-ls100h", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 4.34, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 9.45, "name": "Test WER"}]}]}]}
asapp/sew-d-base-plus-400k-ft-ls100h
null
[ "transformers", "pytorch", "sew-d", "automatic-speech-recognition", "audio", "speech", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew-d #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
SEW-D-base+ =========== SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . Usage ===== To transcribe audio files the model can be used as a standalone acoustic model as follows: Evaluation ---------- This code snippet shows how to evaluate asapp/sew-d-base-plus-400k-ft-ls100h on LibriSpeech's "clean" and "other" test data. *Result (WER)*:
[]
[ "TAGS\n#transformers #pytorch #sew-d #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n" ]
feature-extraction
transformers
# SEW-D-base+ [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-d-base-plus-400k
null
[ "transformers", "pytorch", "sew-d", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-D-base+ SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'.
[ "# SEW-D-base+\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
[ "TAGS\n#transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-D-base+\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
feature-extraction
transformers
# SEW-D-mid [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-d-mid-100k
null
[ "transformers", "pytorch", "sew-d", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-D-mid SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'.
[ "# SEW-D-mid\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
[ "TAGS\n#transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-D-mid\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
automatic-speech-recognition
transformers
# SEW-D-mid [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, SEWDForCTC from datasets import load_dataset import soundfile as sf import torch # load the model and preprocessor processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-mid-400k-ft-ls100h") model = SEWDForCTC.from_pretrained("asapp/sew-d-mid-400k-ft-ls100h") # load the dummy dataset with speech samples ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # preprocess input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **asapp/sew-d-mid-400k-ft-ls100h** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import SEWDForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = SEWDForCTC.from_pretrained("asapp/sew-d-mid-400k-ft-ls100h").to("cuda") processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-mid-400k-ft-ls100h") def map_to_pred(batch): input_values = processor(batch["audio"][0]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | | --- | --- | | 4.94 | 11.51 |
{"language": "en", "license": "apache-2.0", "tags": ["audio", "speech", "automatic-speech-recognition", "hf-asr-leaderboard"], "datasets": ["librispeech_asr"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "sew-d-mid-400k-ft-ls100h", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 4.94, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 11.51, "name": "Test WER"}]}]}]}
asapp/sew-d-mid-400k-ft-ls100h
null
[ "transformers", "pytorch", "sew-d", "automatic-speech-recognition", "audio", "speech", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew-d #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #model-index #endpoints_compatible #region-us
SEW-D-mid ========= SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . Usage ===== To transcribe audio files the model can be used as a standalone acoustic model as follows: Evaluation ---------- This code snippet shows how to evaluate asapp/sew-d-mid-400k-ft-ls100h on LibriSpeech's "clean" and "other" test data. *Result (WER)*:
[]
[ "TAGS\n#transformers #pytorch #sew-d #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
# SEW-D-mid [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-d-mid-400k
null
[ "transformers", "pytorch", "sew-d", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-D-mid SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'.
[ "# SEW-D-mid\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
[ "TAGS\n#transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-D-mid\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
feature-extraction
transformers
# SEW-D-mid [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-d-mid-k127-100k
null
[ "transformers", "pytorch", "sew-d", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-D-mid SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'.
[ "# SEW-D-mid\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
[ "TAGS\n#transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-D-mid\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
automatic-speech-recognition
transformers
# SEW-D-mid-k127 [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, SEWDForCTC from datasets import load_dataset import soundfile as sf import torch # load the model and preprocessor processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-mid-k127-400k-ft-ls100h") model = SEWDForCTC.from_pretrained("asapp/sew-d-mid-k127-400k-ft-ls100h") # load the dummy dataset with speech samples ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # preprocess input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **asapp/sew-d-mid-k127-400k-ft-ls100h** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import SEWDForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = SEWDForCTC.from_pretrained("asapp/sew-d-mid-k127-400k-ft-ls100h").to("cuda") processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-mid-k127-400k-ft-ls100h") def map_to_pred(batch): input_values = processor(batch["audio"][0]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | | --- | --- | | 4.99 | 10.95 |
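As a shorter alternative to the standalone snippets in the card, the same checkpoint can also be driven through the high-level `pipeline` API. This is only a sketch: it assumes `ffmpeg` is available for decoding local files, and `example.flac` is a placeholder path rather than a file shipped with the model.

```python
from transformers import pipeline

# A higher-level alternative to the manual processor/model code in the card above.
asr = pipeline("automatic-speech-recognition", model="asapp/sew-d-mid-k127-400k-ft-ls100h")

# Any local WAV/FLAC file; the pipeline resamples it to the 16 kHz the model expects.
print(asr("example.flac")["text"])
```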
{"language": "en", "license": "apache-2.0", "tags": ["audio", "speech", "automatic-speech-recognition", "hf-asr-leaderboard"], "datasets": ["librispeech_asr"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "sew-d-mid-k127-400k-ft-ls100h", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 4.99, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 10.95, "name": "Test WER"}]}]}]}
asapp/sew-d-mid-k127-400k-ft-ls100h
null
[ "transformers", "pytorch", "safetensors", "sew-d", "automatic-speech-recognition", "audio", "speech", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #sew-d #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #model-index #endpoints_compatible #region-us
SEW-D-mid-k127 ============== SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . Usage ===== To transcribe audio files the model can be used as a standalone acoustic model as follows: Evaluation ---------- This code snippet shows how to evaluate asapp/sew-d-mid-k127-400k-ft-ls100h on LibriSpeech's "clean" and "other" test data. *Result (WER)*:
[]
[ "TAGS\n#transformers #pytorch #safetensors #sew-d #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
# SEW-D-mid [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
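One way to act on the `SEWDForCTC` note above is sketched below, following the general pattern of the linked fine-tuning blog: build a task vocabulary, then attach a freshly initialised CTC head to the pretrained encoder. `vocab.json` is a hypothetical file created for the target task, and the keyword arguments mirror the Wav2Vec2 recipe rather than anything prescribed by this card.

```python
from transformers import SEWDForCTC, Wav2Vec2CTCTokenizer

# A character-level tokenizer built from a task-specific vocab.json (hypothetical file).
tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)

# Load the pretrained encoder and attach a randomly initialised CTC head sized to the vocab.
model = SEWDForCTC.from_pretrained(
    "asapp/sew-d-mid-k127-400k",
    vocab_size=len(tokenizer),
    pad_token_id=tokenizer.pad_token_id,
    ctc_loss_reduction="mean",
)
print(model.lm_head)  # new output layer that would be trained on labelled audio
```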
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-d-mid-k127-400k
null
[ "transformers", "pytorch", "sew-d", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-D-mid SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'.
[ "# SEW-D-mid\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
[ "TAGS\n#transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-D-mid\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
feature-extraction
transformers
# SEW-D-small [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
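Since the card stresses 16 kHz input, a small sketch of resampling arbitrary audio before feature extraction may help. It assumes `torchaudio` is installed and that the checkpoint ships a preprocessor config; `speech.wav` is a placeholder path.

```python
import torch
import torchaudio
from transformers import AutoFeatureExtractor, SEWDModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-d-small-100k")
model = SEWDModel.from_pretrained("asapp/sew-d-small-100k").eval()

# Load an arbitrary audio file and resample it to the 16 kHz the model expects.
waveform, sample_rate = torchaudio.load("speech.wav")  # placeholder path
waveform = torchaudio.functional.resample(waveform, sample_rate, 16000).mean(dim=0)  # mono

inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state
print(features.shape)
```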
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-d-small-100k
null
[ "transformers", "pytorch", "sew-d", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-D-small SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'.
[ "# SEW-D-small\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
[ "TAGS\n#transformers #pytorch #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-D-small\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
automatic-speech-recognition
transformers
# SEW-D-tiny [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, SEWDForCTC from datasets import load_dataset import soundfile as sf import torch # load the model and preprocessor processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h") model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h") # load the dummy dataset with speech samples ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # preprocess input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **asapp/sew-d-tiny-100k-ft-ls100h** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import SEWDForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h").to("cuda") processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h") def map_to_pred(batch): input_values = processor(batch["audio"][0]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | | --- | --- | | 10.47 | 22.73 |
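For recordings longer than the short LibriSpeech clips used above, chunked inference through the `pipeline` API is one option. This is a sketch only: the chunk and stride lengths are illustrative values, `long_recording.wav` is a placeholder, and decoding local files assumes `ffmpeg` is installed.

```python
from transformers import pipeline

# Chunked inference lets the small CTC model transcribe recordings longer than a few seconds.
asr = pipeline(
    "automatic-speech-recognition",
    model="asapp/sew-d-tiny-100k-ft-ls100h",
    chunk_length_s=10,
    stride_length_s=2,
)
print(asr("long_recording.wav")["text"])  # placeholder path
```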
{"language": "en", "license": "apache-2.0", "tags": ["audio", "speech", "automatic-speech-recognition", "hf-asr-leaderboard"], "datasets": ["librispeech_asr"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "sew-d-tiny-100k-ft-ls100h", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 10.47, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 22.73, "name": "Test WER"}]}]}]}
asapp/sew-d-tiny-100k-ft-ls100h
null
[ "transformers", "pytorch", "safetensors", "sew-d", "automatic-speech-recognition", "audio", "speech", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #sew-d #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
SEW-D-tiny ========== SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . Usage ===== To transcribe audio files the model can be used as a standalone acoustic model as follows: Evaluation ---------- This code snippet shows how to evaluate asapp/sew-d-tiny-100k-ft-ls100h on LibriSpeech's "clean" and "other" test data. *Result (WER)*:
[]
[ "TAGS\n#transformers #pytorch #safetensors #sew-d #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n" ]
feature-extraction
transformers
# SEW-D-tiny [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
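To illustrate the downstream tasks the card mentions (speaker identification, intent classification, emotion recognition), here is a hypothetical, untrained classification head on top of mean-pooled SEW-D features. The class name, pooling strategy, and two-label setup are all assumptions made for the sketch, not part of the released model.

```python
import torch
from torch import nn
from transformers import AutoFeatureExtractor, SEWDModel

# Toy two-class head (e.g. a minimal intent or emotion classifier); weights are untrained.
class SEWDClassifier(nn.Module):
    def __init__(self, checkpoint="asapp/sew-d-tiny-100k", num_labels=2):
        super().__init__()
        self.encoder = SEWDModel.from_pretrained(checkpoint)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_values):
        hidden = self.encoder(input_values).last_hidden_state  # (batch, frames, hidden)
        pooled = hidden.mean(dim=1)                            # simple mean pooling over time
        return self.head(pooled)

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-d-tiny-100k")
model = SEWDClassifier()
inputs = feature_extractor(torch.zeros(16000).numpy(), sampling_rate=16000, return_tensors="pt")
print(model(inputs.input_values).shape)  # torch.Size([1, 2])
```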
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-d-tiny-100k
null
[ "transformers", "pytorch", "safetensors", "sew-d", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-D-tiny SEW-D by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'.
[ "# SEW-D-tiny\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
[ "TAGS\n#transformers #pytorch #safetensors #sew-d #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-D-tiny\n\nSEW-D by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWDForCTC'." ]
feature-extraction
transformers
# SEW-mid [SEW by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
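A minimal feature-extraction sketch for this SEW (non-D) checkpoint is shown below, assuming the repository ships a preprocessor config and a recent `transformers` release with the SEW classes. The two seconds of silence stand in for real 16 kHz audio, and the final line only gives a rough frames-per-second figure.

```python
import torch
from transformers import AutoFeatureExtractor, SEWModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-mid-100k")
model = SEWModel.from_pretrained("asapp/sew-mid-100k").eval()

# Two seconds of dummy 16 kHz audio; a real waveform would be used in practice.
inputs = feature_extractor(torch.zeros(2 * 16000).numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state

# Rough number of encoder frames produced per second of audio.
print(hidden.shape, hidden.shape[1] / 2.0)
```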
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-mid-100k
null
[ "transformers", "pytorch", "safetensors", "sew", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #sew #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-mid SEW by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWForCTC'.
[ "# SEW-mid\n\nSEW by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWForCTC'." ]
[ "TAGS\n#transformers #pytorch #safetensors #sew #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-mid\n\nSEW by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWForCTC'." ]
feature-extraction
transformers
# SEW-small [SEW by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
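The abstract's efficiency claims can be sanity-checked informally with a simple timing loop like the one below. The numbers depend entirely on local hardware and do not reproduce the paper's measurements; the ten-second silent input is a placeholder.

```python
import time
import torch
from transformers import AutoFeatureExtractor, SEWModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-small-100k")
model = SEWModel.from_pretrained("asapp/sew-small-100k").eval()

# Ten seconds of dummy 16 kHz audio.
inputs = feature_extractor(torch.zeros(10 * 16000).numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    model(**inputs)                      # warm-up pass
    start = time.perf_counter()
    model(**inputs)
    print(f"CPU forward pass: {time.perf_counter() - start:.2f}s")
```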
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-small-100k
null
[ "transformers", "pytorch", "sew", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #sew #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-small SEW by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWForCTC'.
[ "# SEW-small\n\nSEW by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWForCTC'." ]
[ "TAGS\n#transformers #pytorch #sew #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-small\n\nSEW by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWForCTC'." ]
automatic-speech-recognition
transformers
# SEW-tiny [SEW by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, SEWForCTC from datasets import load_dataset import soundfile as sf import torch # load the model and preprocessor processor = Wav2Vec2Processor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h") model = SEWForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h") # load the dummy dataset with speech samples ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # preprocess input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **asapp/sew-tiny-100k-ft-ls100h** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import SEWForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = SEWForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h").to("cuda") processor = Wav2Vec2Processor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h") def map_to_pred(batch): input_values = processor(batch["audio"][0]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | | --- | --- | | 10.61 | 23.74 |
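The same transcription flow also works through the architecture-agnostic `AutoModelForCTC` class, which resolves to `SEWForCTC` for this checkpoint. The sketch below mirrors the card's example with that substitution and keeps the dummy LibriSpeech split as input; it is an illustration, not part of the original card.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, Wav2Vec2Processor

# AutoModelForCTC picks the right architecture, so the same code also works for
# other CTC checkpoints on the Hub.
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
model = AutoModelForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h").eval()

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    pred_ids = torch.argmax(model(inputs.input_values).logits, dim=-1)
print(processor.batch_decode(pred_ids))
```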
{"language": "en", "license": "apache-2.0", "tags": ["audio", "speech", "automatic-speech-recognition", "hf-asr-leaderboard"], "datasets": ["librispeech_asr"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "sew-tiny-100k-ft-ls100h", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 10.61, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 23.74, "name": "Test WER"}]}]}]}
asapp/sew-tiny-100k-ft-ls100h
null
[ "transformers", "pytorch", "safetensors", "sew", "automatic-speech-recognition", "audio", "speech", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #sew #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #model-index #endpoints_compatible #region-us
SEW-tiny ======== SEW by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . Usage ===== To transcribe audio files the model can be used as a standalone acoustic model as follows: Evaluation ---------- This code snippet shows how to evaluate asapp/sew-tiny-100k-ft-ls100h on LibriSpeech's "clean" and "other" test data. *Result (WER)*:
[]
[ "TAGS\n#transformers #pytorch #safetensors #sew #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
# SEW-tiny [SEW by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
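To make the note above concrete, the following is a minimal sketch (not part of the original card) of extracting frame-level features with this checkpoint. The random waveform is only a stand-in for real 16kHz speech, and the feature extractor is constructed by hand here in case the repository does not ship its own preprocessor config.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, SEWModel

# stand-in for one second of real speech sampled at 16kHz
waveform = torch.randn(16000).numpy()

# SEW consumes raw 16kHz float waveforms, like wav2vec 2.0
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000)
model = SEWModel.from_pretrained("asapp/sew-tiny-100k")

inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# frame-level representations to feed a downstream head (e.g. SEWForCTC for ASR)
print(outputs.last_hidden_state.shape)
```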
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
asapp/sew-tiny-100k
null
[ "transformers", "pytorch", "safetensors", "sew", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.06870" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #sew #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us
# SEW-tiny SEW by ASAPP Research The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi Abstract This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWForCTC'.
[ "# SEW-tiny\n\nSEW by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWForCTC'." ]
[ "TAGS\n#transformers #pytorch #safetensors #sew #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2109.06870 #license-apache-2.0 #endpoints_compatible #region-us \n", "# SEW-tiny\n\nSEW by ASAPP Research\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nPaper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition\n\nAuthors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi\n\nAbstract\nThis paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'SEWForCTC'." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "model-index": [{"name": "t5-small-finetuned-xsum", "results": []}]}
aseda/t5-small-finetuned-xsum
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# t5-small-finetuned-xsum This model is a fine-tuned version of t5-small on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# t5-small-finetuned-xsum\n\nThis model is a fine-tuned version of t5-small on the xsum dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# t5-small-finetuned-xsum\n\nThis model is a fine-tuned version of t5-small on the xsum dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.0171 - Mae: 0.5310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1404 | 1.0 | 308 | 1.0720 | 0.5398 | | 0.9805 | 2.0 | 616 | 1.0171 | 0.5310 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc", "results": []}]}
ashish-chouhan/xlm-roberta-base-finetuned-marc
null
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-marc =============================== This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset. It achieves the following results on the evaluation set: * Loss: 1.0171 * Mae: 0.5310 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
## Natural Don't Know Response Model

Fine-tuned on [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) using a combination of dependency-rule-based data and the [Quora Question Pairs (QQP)](https://huggingface.co/nlp/viewer/?dataset=quora) dataset for the **Don't Know Response Generation** task.

Additional information about this model:
- Paper: [Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries](https://arxiv.org/pdf/2012.01873.pdf)
- GitHub Repo: https://github.com/kaustubhdhole/natural-dont-know

#### How to use

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "ashish-shrivastava/dont-know-response"
model = T5ForConditionalGeneration.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)

input = "Where can I find good Italian food ?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(decoded_output) # I'm not sure where you can get good quality Italian food.
```

#### Hyperparameters

```
n_epochs = 2
base_LM_model = "T5-base"
max_seq_len = 256
learning_rate = 3e-4
adam_epsilon = 1e-8
train_batch_size = 6
```

#### BibTeX entry and citation info

```bibtex
@misc{shrivastava2020saying,
      title={Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries},
      author={Ashish Shrivastava and Kaustubh Dhole and Abhinav Bhatt and Sharvani Raghunath},
      year={2020},
      eprint={2012.01873},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
{}
ashish-shrivastava/dont-know-response
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "arxiv:2012.01873", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.01873" ]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #arxiv-2012.01873 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
## Natural Don't Know Response Model Fine-tuned on Google's T5 using a combination of a dependency-rule based data and Quora Question Pairs(QQP) dataset for Don't Know Response Generation task. Additional information about this model: - Paper : Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries - Github Repo: URL #### How to use #### Hyperparameters #### BibTeX entry and citation info
[ "## Natural Don't Know Response Model\n\nFine-tuned on Google's T5 using a combination of a dependency-rule based data and Quora Question Pairs(QQP) dataset for Don't Know Response Generation task.\n\nAdditional information about this model:\n- Paper : Saying No is An Art: Contextualized Fallback Responses for\nUnanswerable Dialogue Queries\n- Github Repo: URL", "#### How to use", "#### Hyperparameters", "#### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #arxiv-2012.01873 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Natural Don't Know Response Model\n\nFine-tuned on Google's T5 using a combination of a dependency-rule based data and Quora Question Pairs(QQP) dataset for Don't Know Response Generation task.\n\nAdditional information about this model:\n- Paper : Saying No is An Art: Contextualized Fallback Responses for\nUnanswerable Dialogue Queries\n- Github Repo: URL", "#### How to use", "#### Hyperparameters", "#### BibTeX entry and citation info" ]
text-classification
transformers
# The [ELECTRA-small](https://huggingface.co/ashraq/dv-electra-small) model fine-tuned for news classification in Dhivehi
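A minimal usage sketch, not from the original card: the label set is undocumented here, so whatever label names the checkpoint's config defines will be printed, and the placeholder string should be replaced with real Dhivehi news text.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ashraq/dv-electra-small-news-classification",
)

headline = "..."  # replace with a Dhivehi news headline or article snippet
print(classifier(headline))
```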
{"widget": [{"text": "\u078e\u07ab\u078e\u07a6\u078d\u07b0 \u0795\u07a8\u0786\u07b0\u0790\u07a6\u078d\u07b0 6 \u078e\u07ac \u0786\u07ac\u0789\u07ac\u0783\u07a7\u060c \u0787\u07ad\u0787\u07a6\u0787\u07a8 \u078e\u07ac \u0796\u07a7\u078b\u07ab\u0787\u07a8\u0782\u07b0 \u078a\u07aa\u0783\u07a8\u078a\u07a6\u0787\u07a8"}]}
ashraq/dv-electra-small-news-classification
null
[ "transformers", "pytorch", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us
# The ELECTRA-small fine-tuned for news classification in Dhivehi
[ "# The ELECTRA-small fine-tuned for news classification in Dhivehi" ]
[ "TAGS\n#transformers #pytorch #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "# The ELECTRA-small fine-tuned for news classification in Dhivehi" ]
sentence-similarity
sentence-transformers
# Dhivehi TSDAE News BERT This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ashraq/tsdae-bert-base-dv-news-title') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ashraq/tsdae-bert-base-dv-news-title') model = AutoModel.from_pretrained('ashraq/tsdae-bert-base-dv-news-title') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7331 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 0.00024 }, "scheduler": "constantlr", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
{"language": ["dv"], "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
ashraq/tsdae-bert-base-dv-news-title
null
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "dv", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "dv" ]
TAGS #sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #dv #endpoints_compatible #region-us
# Dhivehi TSDAE News BERT This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Usage (HuggingFace Transformers) Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 7331 with parameters: Loss: 'sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss' Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# Dhivehi TSDAE News BERT\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 7331 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #dv #endpoints_compatible #region-us \n", "# Dhivehi TSDAE News BERT\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 7331 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
fill-mask
transformers
# Gujarati-XLM-R-Base This model is finetuned over [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) using its base variant with the Gujarati language using the [OSCAR](https://oscar-corpus.com/) monolingual dataset. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model. ## Dataset OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/) who reported better performance with this diverse dataset as compared to the other large homogenous datasets. ## Preprocessing and Training Procedure Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure. ## Usage - This model can be used for further finetuning for different NLP tasks using the Gujarati language. - It can be used to generate contextualised word representations for the Gujarati words. - It can be used for domain adaptation. - It can be used to predict the missing words from the Gujarati sentences. ## Demo ### Using the model to predict missing words ``` from transformers import pipeline unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-XLM-R-Base') pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.") print(pred_word) ``` ``` [{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.9463568329811096, 'token': 85227, 'token_str': '▁શહેર'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.013311690650880337, 'token': 66346, 'token_str': '▁ગામ'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એકનગર છે.</s>', 'score': 0.012945962138473988, 'token': 69702, 'token_str': 'નગર'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક સ્થળ છે.</s>', 'score': 0.0045941537246108055, 'token': 135436, 'token_str': '▁સ્થળ'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક મહત્વ છે.</s>', 'score': 0.00402021361514926, 'token': 126763, 'token_str': '▁મહત્વ'}] ``` ### Using the model to generate contextualised word representations ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Base") model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Base") sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે." encoded_sentence = tokenizer(sentence, return_tensors='pt') context_word_rep = model(**encoded_sentence) ```
{"language": "gu"}
ashwani-tanwar/Gujarati-XLM-R-Base
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "gu", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "gu" ]
TAGS #transformers #pytorch #tf #xlm-roberta #fill-mask #gu #autotrain_compatible #endpoints_compatible #region-us
# Gujarati-XLM-R-Base This model is finetuned over XLM-RoBERTa (XLM-R) using its base variant with the Gujarati language using the OSCAR monolingual dataset. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model. ## Dataset OSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets. ## Preprocessing and Training Procedure Please visit this link for the detailed procedure. ## Usage - This model can be used for further finetuning for different NLP tasks using the Gujarati language. - It can be used to generate contextualised word representations for the Gujarati words. - It can be used for domain adaptation. - It can be used to predict the missing words from the Gujarati sentences. ## Demo ### Using the model to predict missing words ### Using the model to generate contextualised word representations
[ "# Gujarati-XLM-R-Base\r\n\r\n\r\nThis model is finetuned over XLM-RoBERTa (XLM-R) using its base variant with the Gujarati language using the OSCAR monolingual dataset. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model.", "## Dataset\r\nOSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets.", "## Preprocessing and Training Procedure\r\nPlease visit this link for the detailed procedure.", "## Usage\r\n- This model can be used for further finetuning for different NLP tasks using the Gujarati language.\r\n- It can be used to generate contextualised word representations for the Gujarati words.\r\n- It can be used for domain adaptation.\r\n- It can be used to predict the missing words from the Gujarati sentences.", "## Demo\r\n ### Using the model to predict missing words\r\n \r\n \r\n ### Using the model to generate contextualised word representations" ]
[ "TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #gu #autotrain_compatible #endpoints_compatible #region-us \n", "# Gujarati-XLM-R-Base\r\n\r\n\r\nThis model is finetuned over XLM-RoBERTa (XLM-R) using its base variant with the Gujarati language using the OSCAR monolingual dataset. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model.", "## Dataset\r\nOSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets.", "## Preprocessing and Training Procedure\r\nPlease visit this link for the detailed procedure.", "## Usage\r\n- This model can be used for further finetuning for different NLP tasks using the Gujarati language.\r\n- It can be used to generate contextualised word representations for the Gujarati words.\r\n- It can be used for domain adaptation.\r\n- It can be used to predict the missing words from the Gujarati sentences.", "## Demo\r\n ### Using the model to predict missing words\r\n \r\n \r\n ### Using the model to generate contextualised word representations" ]
fill-mask
transformers
# Gujarati-XLM-R-Large This model is finetuned over [XLM-RoBERTa](https://huggingface.co/xlm-roberta-large) (XLM-R) using its large variant with the Gujarati language using the [OSCAR](https://oscar-corpus.com/) monolingual dataset. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model. ## Dataset OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/) who reported better performance with this diverse dataset as compared to the other large homogenous datasets. ## Preprocessing and Training Procedure Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure. ## Usage - This model can be used for further finetuning for different NLP tasks using the Gujarati language. - It can be used to generate contextualised word representations for the Gujarati words. - It can be used for domain adaptation. - It can be used to predict the missing words from the Gujarati sentences. ## Demo ### Using the model to predict missing words ``` from transformers import pipeline unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-XLM-R-Large') pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.") print(pred_word) ``` ``` [{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.9790881276130676, 'token': 85227, 'token_str': '▁શહેર'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક રાજ્ય છે.</s>', 'score': 0.004246668424457312, 'token': 63678, 'token_str': '▁રાજ્ય'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.0038021174259483814, 'token': 66346, 'token_str': '▁ગામ'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક મહત્વ છે.</s>', 'score': 0.002798238070681691, 'token': 126763, 'token_str': '▁મહત્વ'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક અમદાવાદ છે.</s>', 'score': 0.0021192911081016064, 'token': 69499, 'token_str': '▁અમદાવાદ'}] ``` ### Using the model to generate contextualised word representations ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Large") model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Large") sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે." encoded_sentence = tokenizer(sentence, return_tensors='pt') context_word_rep = model(**encoded_sentence) ```
{"language": "gu"}
ashwani-tanwar/Gujarati-XLM-R-Large
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "gu", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "gu" ]
TAGS #transformers #pytorch #tf #xlm-roberta #fill-mask #gu #autotrain_compatible #endpoints_compatible #region-us
# Gujarati-XLM-R-Large This model is finetuned over XLM-RoBERTa (XLM-R) using its large variant with the Gujarati language using the OSCAR monolingual dataset. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model. ## Dataset OSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets. ## Preprocessing and Training Procedure Please visit this link for the detailed procedure. ## Usage - This model can be used for further finetuning for different NLP tasks using the Gujarati language. - It can be used to generate contextualised word representations for the Gujarati words. - It can be used for domain adaptation. - It can be used to predict the missing words from the Gujarati sentences. ## Demo ### Using the model to predict missing words ### Using the model to generate contextualised word representations
[ "# Gujarati-XLM-R-Large\n\n\nThis model is finetuned over XLM-RoBERTa (XLM-R) using its large variant with the Gujarati language using the OSCAR monolingual dataset. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model.", "## Dataset\nOSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets.", "## Preprocessing and Training Procedure\nPlease visit this link for the detailed procedure.", "## Usage\n- This model can be used for further finetuning for different NLP tasks using the Gujarati language.\n- It can be used to generate contextualised word representations for the Gujarati words.\n- It can be used for domain adaptation.\n- It can be used to predict the missing words from the Gujarati sentences.", "## Demo\n ### Using the model to predict missing words\n \n \n ### Using the model to generate contextualised word representations" ]
[ "TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #gu #autotrain_compatible #endpoints_compatible #region-us \n", "# Gujarati-XLM-R-Large\n\n\nThis model is finetuned over XLM-RoBERTa (XLM-R) using its large variant with the Gujarati language using the OSCAR monolingual dataset. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model.", "## Dataset\nOSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets.", "## Preprocessing and Training Procedure\nPlease visit this link for the detailed procedure.", "## Usage\n- This model can be used for further finetuning for different NLP tasks using the Gujarati language.\n- It can be used to generate contextualised word representations for the Gujarati words.\n- It can be used for domain adaptation.\n- It can be used to predict the missing words from the Gujarati sentences.", "## Demo\n ### Using the model to predict missing words\n \n \n ### Using the model to generate contextualised word representations" ]
fill-mask
transformers
# Gujarati-in-Devanagari-XLM-R-Base This model is finetuned over [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) using its base variant with the Gujarati language using the [OSCAR](https://oscar-corpus.com/) monolingual dataset. We converted the Gujarati script to the Devanagari using [Indic-NLP](https://github.com/anoopkunchukuttan/indic_nlp_library) library. For example, the sentence 'અમદાવાદ એ ગુજરાતનું એક શહેર છે.' was converted to 'अमदावाद ए गुजरातनुं एक शहेर छे.'. This helped to get better contextualised representations for some words as the XLM-R was pre-trained with several languages written in Devanagari script such as Hindi, Marathi, Sanskrit, and so on. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model. ## Dataset OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/) who reported better performance with this diverse dataset as compared to the other large homogenous datasets. ## Preprocessing and Training Procedure Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure. ## Usage - This model can be used for further finetuning for different NLP tasks using the Gujarati language. - It can be used to generate contextualised word representations for the Gujarati words. - It can be used for domain adaptation. - It can be used to predict the missing words from the Gujarati sentences. ## Demo ### Using the model to predict missing words ``` from transformers import pipeline unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base') pred_word = unmasker("अमदावाद ए गुजरातनुं एक <mask> छे.") print(pred_word) ``` ``` [{'sequence': '<s> अमदावाद ए गुजरातनुं एक नगर छे.</s>', 'score': 0.24843722581863403, 'token': 18576, 'token_str': '▁नगर'}, {'sequence': '<s> अमदावाद ए गुजरातनुं एक महानगर छे.</s>', 'score': 0.21455222368240356, 'token': 122519, 'token_str': '▁महानगर'}, {'sequence': '<s> अमदावाद ए गुजरातनुं एक राज्य छे.</s>', 'score': 0.16832049190998077, 'token': 10665, 'token_str': '▁राज्य'}, {'sequence': '<s> अमदावाद ए गुजरातनुं एक जिल्ला छे.</s>', 'score': 0.06764694303274155, 'token': 20396, 'token_str': '▁जिल्ला'}, {'sequence': '<s> अमदावाद ए गुजरातनुं एक शहर छे.</s>', 'score': 0.05364946648478508, 'token': 22770, 'token_str': '▁शहर'}] ``` ### Using the model to generate contextualised word representations ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base") model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base") sentence = "अमदावाद ए गुजरातनुं एक शहेर छे." encoded_sentence = tokenizer(sentence, return_tensors='pt') context_word_rep = model(**encoded_sentence) ```
{"language": "gu"}
ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "gu", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "gu" ]
TAGS #transformers #pytorch #tf #xlm-roberta #fill-mask #gu #autotrain_compatible #endpoints_compatible #region-us
# Gujarati-in-Devanagari-XLM-R-Base This model is finetuned over XLM-RoBERTa (XLM-R) using its base variant with the Gujarati language using the OSCAR monolingual dataset. We converted the Gujarati script to the Devanagari using Indic-NLP library. For example, the sentence 'અમદાવાદ એ ગુજરાતનું એક શહેર છે.' was converted to 'अमदावाद ए गुजरातनुं एक शहेर छे.'. This helped to get better contextualised representations for some words as the XLM-R was pre-trained with several languages written in Devanagari script such as Hindi, Marathi, Sanskrit, and so on. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model. ## Dataset OSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets. ## Preprocessing and Training Procedure Please visit this link for the detailed procedure. ## Usage - This model can be used for further finetuning for different NLP tasks using the Gujarati language. - It can be used to generate contextualised word representations for the Gujarati words. - It can be used for domain adaptation. - It can be used to predict the missing words from the Gujarati sentences. ## Demo ### Using the model to predict missing words ### Using the model to generate contextualised word representations
[ "# Gujarati-in-Devanagari-XLM-R-Base\n\n\nThis model is finetuned over XLM-RoBERTa (XLM-R) using its base variant with the Gujarati language using the OSCAR monolingual dataset. We converted the Gujarati script to the Devanagari using Indic-NLP library. For example, the sentence 'અમદાવાદ એ ગુજરાતનું એક શહેર છે.' was converted to 'अमदावाद ए गुजरातनुं एक शहेर छे.'. This helped to get better contextualised representations for some words as the XLM-R was pre-trained with several languages written in Devanagari script such as Hindi, Marathi, Sanskrit, and so on. \n\nWe used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model.", "## Dataset\nOSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets.", "## Preprocessing and Training Procedure\nPlease visit this link for the detailed procedure.", "## Usage\n- This model can be used for further finetuning for different NLP tasks using the Gujarati language.\n- It can be used to generate contextualised word representations for the Gujarati words.\n- It can be used for domain adaptation.\n- It can be used to predict the missing words from the Gujarati sentences.", "## Demo\n ### Using the model to predict missing words\n \n \n ### Using the model to generate contextualised word representations" ]
[ "TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #gu #autotrain_compatible #endpoints_compatible #region-us \n", "# Gujarati-in-Devanagari-XLM-R-Base\n\n\nThis model is finetuned over XLM-RoBERTa (XLM-R) using its base variant with the Gujarati language using the OSCAR monolingual dataset. We converted the Gujarati script to the Devanagari using Indic-NLP library. For example, the sentence 'અમદાવાદ એ ગુજરાતનું એક શહેર છે.' was converted to 'अमदावाद ए गुजरातनुं एक शहेर छे.'. This helped to get better contextualised representations for some words as the XLM-R was pre-trained with several languages written in Devanagari script such as Hindi, Marathi, Sanskrit, and so on. \n\nWe used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model.", "## Dataset\nOSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets.", "## Preprocessing and Training Procedure\nPlease visit this link for the detailed procedure.", "## Usage\n- This model can be used for further finetuning for different NLP tasks using the Gujarati language.\n- It can be used to generate contextualised word representations for the Gujarati words.\n- It can be used for domain adaptation.\n- It can be used to predict the missing words from the Gujarati sentences.", "## Demo\n ### Using the model to predict missing words\n \n \n ### Using the model to generate contextualised word representations" ]
fill-mask
transformers
# Indo-Aryan-XLM-R-Base This model is finetuned over [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) using its base variant with the Hindi, Gujarati, Marathi, and Bengali languages from the Indo-Aryan family using the [OSCAR](https://oscar-corpus.com/) monolingual datasets. As these languages had imbalanced datasets, we used resampling strategies as used in pretraining the XLM-R to balance the resulting dataset after combining these languages. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model. ## Dataset OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/) who reported better performance with this diverse dataset as compared to the other large homogenous datasets. ## Preprocessing and Training Procedure Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure. ## Usage - This model can be used for further finetuning for different NLP tasks using the Hindi, Gujarati, Marathi, and Bengali languages. - It can be used to generate contextualised word representations for the words from the above languages. - It can be used for domain adaptation. - It can be used to predict the missing words from their sentences. ## Demo ### Using the model to predict missing words ``` from transformers import pipeline unmasker = pipeline('fill-mask', model='ashwani-tanwar/Indo-Aryan-XLM-R-Base') pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.") print(pred_word) ``` ``` [{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.7811868786811829, 'token': 85227, 'token_str': '▁શહેર'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.055032357573509216, 'token': 66346, 'token_str': '▁ગામ'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક નામ છે.</s>', 'score': 0.0287721399217844, 'token': 29565, 'token_str': '▁નામ'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક રાજ્ય છે.</s>', 'score': 0.02565067447721958, 'token': 63678, 'token_str': '▁રાજ્ય'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એકનગર છે.</s>', 'score': 0.022877279669046402, 'token': 69702, 'token_str': 'નગર'}] ``` ### Using the model to generate contextualised word representations ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Indo-Aryan-XLM-R-Base") model = AutoModel.from_pretrained("ashwani-tanwar/Indo-Aryan-XLM-R-Base") sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે." encoded_sentence = tokenizer(sentence, return_tensors='pt') context_word_rep = model(**encoded_sentence) ```
{"language": ["gu", "hi", "mr", "bn"]}
ashwani-tanwar/Indo-Aryan-XLM-R-Base
null
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "gu", "hi", "mr", "bn", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "gu", "hi", "mr", "bn" ]
TAGS #transformers #pytorch #tf #xlm-roberta #fill-mask #gu #hi #mr #bn #autotrain_compatible #endpoints_compatible #region-us
# Indo-Aryan-XLM-R-Base This model is finetuned over XLM-RoBERTa (XLM-R) using its base variant with the Hindi, Gujarati, Marathi, and Bengali languages from the Indo-Aryan family using the OSCAR monolingual datasets. As these languages had imbalanced datasets, we used resampling strategies as used in pretraining the XLM-R to balance the resulting dataset after combining these languages. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model. ## Dataset OSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets. ## Preprocessing and Training Procedure Please visit this link for the detailed procedure. ## Usage - This model can be used for further finetuning for different NLP tasks using the Hindi, Gujarati, Marathi, and Bengali languages. - It can be used to generate contextualised word representations for the words from the above languages. - It can be used for domain adaptation. - It can be used to predict the missing words from their sentences. ## Demo ### Using the model to predict missing words ### Using the model to generate contextualised word representations
[ "# Indo-Aryan-XLM-R-Base\n\n\nThis model is finetuned over XLM-RoBERTa (XLM-R) using its base variant with the Hindi, Gujarati, Marathi, and Bengali languages from the Indo-Aryan family using the OSCAR monolingual datasets. As these languages had imbalanced datasets, we used resampling strategies as used in pretraining the XLM-R to balance the resulting dataset after combining these languages. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model.", "## Dataset\nOSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets.", "## Preprocessing and Training Procedure\nPlease visit this link for the detailed procedure.", "## Usage\n- This model can be used for further finetuning for different NLP tasks using the Hindi, Gujarati, Marathi, and Bengali languages.\n- It can be used to generate contextualised word representations for the words from the above languages.\n- It can be used for domain adaptation.\n- It can be used to predict the missing words from their sentences.", "## Demo\n ### Using the model to predict missing words\n \n \n ### Using the model to generate contextualised word representations" ]
[ "TAGS\n#transformers #pytorch #tf #xlm-roberta #fill-mask #gu #hi #mr #bn #autotrain_compatible #endpoints_compatible #region-us \n", "# Indo-Aryan-XLM-R-Base\n\n\nThis model is finetuned over XLM-RoBERTa (XLM-R) using its base variant with the Hindi, Gujarati, Marathi, and Bengali languages from the Indo-Aryan family using the OSCAR monolingual datasets. As these languages had imbalanced datasets, we used resampling strategies as used in pretraining the XLM-R to balance the resulting dataset after combining these languages. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model.", "## Dataset\nOSCAR corpus contains several diverse datasets for different languages. We followed the work of CamemBERT who reported better performance with this diverse dataset as compared to the other large homogenous datasets.", "## Preprocessing and Training Procedure\nPlease visit this link for the detailed procedure.", "## Usage\n- This model can be used for further finetuning for different NLP tasks using the Hindi, Gujarati, Marathi, and Bengali languages.\n- It can be used to generate contextualised word representations for the words from the above languages.\n- It can be used for domain adaptation.\n- It can be used to predict the missing words from their sentences.", "## Demo\n ### Using the model to predict missing words\n \n \n ### Using the model to generate contextualised word representations" ]
text-generation
transformers
# Harry Potter DialoGPT Model
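The card above gives no usage snippet; a minimal single-turn sketch following the common DialoGPT chat recipe (an assumption, since the author does not document usage) could look like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ashwinchandran13/DialoGPT-small-harrypotter")
model = AutoModelForCausalLM.from_pretrained("ashwinchandran13/DialoGPT-small-harrypotter")

# Encode the user turn and append the end-of-sequence token, as in the usual DialoGPT chat loop.
input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
chat_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens, i.e. the model's reply.
print(tokenizer.decode(chat_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```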
{"tags": ["conversational"]}
ashwinchandran13/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text-generation
transformers
<img src="https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png" width="200"> ## Model description **GPT-fr** 🇫🇷 is a GPT model for French developped by [Quantmetry](https://www.quantmetry.com/) and the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We train the model on a very large and heterogeneous French corpus. We release the weights for the following configurations: | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `gpt-fr-cased-small` | 12 | 12 | 768 | 124 M | | `gpt-fr-cased-base` | 24 | 14 | 1,792 | 1,017 B | ## Intended uses & limitations The model can be leveraged for language generation tasks. Besides, many tasks may be formatted such that the output is directly generated in natural language. Such configuration may be used for tasks such as automatic summary or question answering. We do hope our model might be used for both academic and industrial applications. #### How to use The model might be used through the astonishing 🤗 `Transformers` librairie. We use the work from [Shoeybi et al., (2019)](#shoeybi-2019) and calibrate our model such that during pre-training or fine-tuning, the model can fit on a single NVIDIA V100 32GB GPU. ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel # Load pretrained model and tokenizer model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-base") tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-base") # Generate a sample of text model.eval() input_sentence = "Longtemps je me suis couché de bonne heure." input_ids = tokenizer.encode(input_sentence, return_tensors='pt') beam_outputs = model.generate( input_ids, max_length=100, do_sample=True, top_k=50, top_p=0.95, num_return_sequences=1 ) print("Output:\n" + 100 * '-') print(tokenizer.decode(beam_outputs[0], skip_special_tokens=True)) ``` #### Limitations and bias Large language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation. To limit exposition to too much explicit material, we carefully choose the sources beforehand. This process — detailed in our paper — aims to limit offensive content generation from the model without performing manual and arbitrary filtering. However, some societal biases, contained in the data, might be reflected by the model. For example on gender equality, we generated the following sentence sequence "Ma femme/Mon mari vient d'obtenir un nouveau poste en tant \_\_\_\_\_\_\_". We used top-k random sampling strategy with k=50 and stopped at the first punctuation element. The positions generated for the wife is '_que professeur de français._' while the position for the husband is '_que chef de projet._'. We do appreciate your feedback to better qualitatively and quantitatively assess such effects. ## Training data We created a dedicated corpus to train our generative model. Indeed the model uses a fixed-length context size of 1,024 and require long documents to be trained. We aggregated existing corpora: [Wikipedia](https://dumps.wikimedia.org/frwiki/), [OpenSubtitle](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2016/mono/) ([Tiedemann, 2012](#tiedemann-2012)), [Gutenberg](http://www.gutenberg.org) and [Common Crawl](http://data.statmt.org/ngrams/deduped2017/) ([Li et al., 2019](li-2019)). Corpora are filtered and separated into sentences. 
Successive sentences are then concatenated within the limit of 1,024 tokens per document. ## Training procedure We pre-trained the model on the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer. We perform the training within a total of 140 hours of computation on Tesla V-100 hardware (TDP of 300W). The training was distributed on 4 compute nodes of 8 GPUs. We used data parallelization in order to divide each micro-batch on the computing units. We estimated the total emissions at 580.61 kgCO2eq, using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al., (2019)](lacoste-2019). ## Eval results We packaged **GPT-fr** with a dedicated language model evaluation benchmark for French. In line with the [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark in English, we collected over 70 million tokens from the set of verified [good](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Articles_de_qualit%C3%A9) and [featured](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Bons_articles) articles on Wikipedia. The model reaches a zero-shot perplexity of **12.9** on the test set. ### BibTeX entry and citation info Along with the model hosted by HuggingFace transformers library, we maintain a [git repository](https://github.com/AntoineSimoulin/gpt-fr). If you use **GPT-fr** for your scientific publications or your industrial applications, please cite the following paper: ```bibtex @inproceedings{simoulin:hal-03265900, TITLE = {{Un mod{\`e}le Transformer G{\'e}n{\'e}ratif Pr{\'e}-entrain{\'e} pour le \_\_\_\_\_\_ fran{\c c}ais}}, AUTHOR = {Simoulin, Antoine and Crabb{\'e}, Benoit}, URL = {https://hal.archives-ouvertes.fr/hal-03265900}, BOOKTITLE = {{Traitement Automatique des Langues Naturelles}}, ADDRESS = {Lille, France}, EDITOR = {Denis, Pascal and Grabar, Natalia and Fraisse, Amel and Cardon, R{\'e}mi and Jacquemin, Bernard and Kergosien, Eric and Balvet, Antonio}, PUBLISHER = {{ATALA}}, PAGES = {246-255}, YEAR = {2021}, KEYWORDS = {fran{\c c}ais. ; GPT ; G{\'e}n{\'e}ratif ; Transformer ; Pr{\'e}-entra{\^i}n{\'e}}, PDF = {https://hal.archives-ouvertes.fr/hal-03265900/file/7.pdf}, HAL_ID = {hal-03265900}, HAL_VERSION = {v1}, } ``` ### References ><div name="tiedemann-2012">Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218</div> ><div name="li-2019">Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, Hassan Sajjad: Findings of the First Shared Task on Machine Translation Robustness. WMT (2) 2019: 91-102</div> ><div name="shoeybi-2019">Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro: Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR abs/1909.08053 (2019)</div> ><div name="lacoste-2019">Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, Thomas Dandres: Quantifying the Carbon Emissions of Machine Learning. CoRR abs/1910.09700 (2019)</div>
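As a rough illustration of how the zero-shot perplexity reported above can be estimated (this is not the authors' benchmark script, and the sentence below is only a placeholder for the held-out Wikipedia test data):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-base")
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-base")
model.eval()

# Placeholder text; the reported 12.9 was measured on ~70 million held-out Wikipedia tokens,
# scored window by window within the model's fixed 1,024-token context.
text = "La Révolution française est une période de bouleversements politiques et sociaux en France."
input_ids = tokenizer(text, return_tensors="pt").input_ids[:, :1024]

with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss  # mean cross-entropy over the shifted tokens

print("perplexity:", torch.exp(loss).item())
```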
{"language": ["fr"], "license": "apache-2.0", "tags": ["tf", "pytorch", "gpt2", "text-generation"], "thumbnail": "https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png", "model-index": [{"name": "asi/gpt-fr-cased-base", "results": [{"task": {"type": "text-generation", "name": "Wikitext-fr"}, "dataset": {"name": "Wikitext-fr", "type": "wikitext_fr"}, "metrics": [{"type": "perplexity", "value": 12.9, "name": "Perplexity"}]}, {"task": {"type": "text-classification", "name": "FLUE"}, "dataset": {"name": "CLS-Books", "type": "flue", "split": "CLS"}, "metrics": [{"type": "accuracy", "value": 91.6, "name": "Accuracy"}, {"type": "accuracy", "value": 91.4, "name": "Accuracy"}, {"type": "accuracy", "value": 92.6, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "FLUE"}, "dataset": {"name": "PAWS-X", "type": "flue", "split": "PAWS-X"}, "metrics": [{"type": "accuracy", "value": 86.3, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "FLUE"}, "dataset": {"name": "XNLI", "type": "flue", "split": "XNLI"}, "metrics": [{"type": "accuracy", "value": 77.9, "name": "Accuracy"}]}, {"task": {"type": "summarization", "name": "OrangeSum"}, "dataset": {"name": "OrangeSum-Abstract", "type": "orange_sum", "split": "abstract"}, "metrics": [{"type": "rouge", "value": 16.6, "name": "ROUGE-1"}, {"type": "rouge", "value": 3.4, "name": "ROUGE-2"}, {"type": "rouge", "value": 11.5, "name": "ROUGE-L"}]}, {"task": {"type": "summarization", "name": "OrangeSum"}, "dataset": {"name": "OrangeSum-Title", "type": "orange_sum", "split": "title"}, "metrics": [{"type": "rouge", "value": 10.2, "name": "ROUGE-1"}, {"type": "rouge", "value": 2.6, "name": "ROUGE-2"}, {"type": "rouge", "value": 8.4, "name": "ROUGE-L"}]}]}]}
asi/gpt-fr-cased-base
null
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #tf #jax #gpt2 #text-generation #fr #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
<img src="URL width="200"> Model description ----------------- GPT-fr 🇫🇷 is a GPT model for French developped by Quantmetry and the Laboratoire de Linguistique Formelle (LLF). We train the model on a very large and heterogeneous French corpus. We release the weights for the following configurations: Intended uses & limitations --------------------------- The model can be leveraged for language generation tasks. Besides, many tasks may be formatted such that the output is directly generated in natural language. Such configuration may be used for tasks such as automatic summary or question answering. We do hope our model might be used for both academic and industrial applications. #### How to use The model might be used through the astonishing 'Transformers' librairie. We use the work from Shoeybi et al., (2019) and calibrate our model such that during pre-training or fine-tuning, the model can fit on a single NVIDIA V100 32GB GPU. #### Limitations and bias Large language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation. To limit exposition to too much explicit material, we carefully choose the sources beforehand. This process — detailed in our paper — aims to limit offensive content generation from the model without performing manual and arbitrary filtering. However, some societal biases, contained in the data, might be reflected by the model. For example on gender equality, we generated the following sentence sequence "Ma femme/Mon mari vient d'obtenir un nouveau poste en tant \_\_\_\_\_\_\_". We used top-k random sampling strategy with k=50 and stopped at the first punctuation element. The positions generated for the wife is '*que professeur de français.*' while the position for the husband is '*que chef de projet.*'. We do appreciate your feedback to better qualitatively and quantitatively assess such effects. Training data ------------- We created a dedicated corpus to train our generative model. Indeed the model uses a fixed-length context size of 1,024 and require long documents to be trained. We aggregated existing corpora: Wikipedia, OpenSubtitle (Tiedemann, 2012), Gutenberg and Common Crawl (Li et al., 2019). Corpora are filtered and separated into sentences. Successive sentences are then concatenated within the limit of 1,024 tokens per document. Training procedure ------------------ We pre-trained the model on the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We perform the training within a total of 140 hours of computation on Tesla V-100 hardware (TDP of 300W). The training was distributed on 4 compute nodes of 8 GPUs. We used data parallelization in order to divide each micro-batch on the computing units. We estimated the total emissions at 580.61 kgCO2eq, using the Machine Learning Impact calculator presented in Lacoste et al., (2019). Eval results ------------ We packaged GPT-fr with a dedicated language model evaluation benchmark for French. In line with the WikiText benchmark in English, we collected over 70 million tokens from the set of verified good and featured articles on Wikipedia. The model reaches a zero-shot perplexity of 12.9 on the test set. ### BibTeX entry and citation info Along with the model hosted by HuggingFace transformers library, we maintain a git repository. 
If you use GPT-fr for your scientific publications or your industrial applications, please cite the following paper: ### References > > Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218 > > > Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, Hassan Sajjad: Findings of the First Shared Task on Machine Translation Robustness. WMT (2) 2019: 91-102 > > > Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro: Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR abs/1909.08053 (2019) > > > Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, Thomas Dandres: Quantifying the Carbon Emissions of Machine Learning. CoRR abs/1910.09700 (2019) >
[ "#### How to use\n\n\nThe model might be used through the astonishing 'Transformers' librairie. We use the work from Shoeybi et al., (2019) and calibrate our model such that during pre-training or fine-tuning, the model can fit on a single NVIDIA V100 32GB GPU.", "#### Limitations and bias\n\n\nLarge language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation.\n\n\nTo limit exposition to too much explicit material, we carefully choose the sources beforehand. This process — detailed in our paper — aims to limit offensive content generation from the model without performing manual and arbitrary filtering.\n\n\nHowever, some societal biases, contained in the data, might be reflected by the model. For example on gender equality, we generated the following sentence sequence \"Ma femme/Mon mari vient d'obtenir un nouveau poste en tant \\_\\_\\_\\_\\_\\_\\_\". We used top-k random sampling strategy with k=50 and stopped at the first punctuation element.\nThe positions generated for the wife is '*que professeur de français.*' while the position for the husband is '*que chef de projet.*'. We do appreciate your feedback to better qualitatively and quantitatively assess such effects.\n\n\nTraining data\n-------------\n\n\nWe created a dedicated corpus to train our generative model. Indeed the model uses a fixed-length context size of 1,024 and require long documents to be trained. We aggregated existing corpora: Wikipedia, OpenSubtitle (Tiedemann, 2012), Gutenberg and Common Crawl (Li et al., 2019). Corpora are filtered and separated into sentences. Successive sentences are then concatenated within the limit of 1,024 tokens per document.\n\n\nTraining procedure\n------------------\n\n\nWe pre-trained the model on the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We perform the training within a total of 140 hours of computation on Tesla V-100 hardware (TDP of 300W). The training was distributed on 4 compute nodes of 8 GPUs. We used data parallelization in order to divide each micro-batch on the computing units. We estimated the total emissions at 580.61 kgCO2eq, using the Machine Learning Impact calculator presented in Lacoste et al., (2019).\n\n\nEval results\n------------\n\n\nWe packaged GPT-fr with a dedicated language model evaluation benchmark for French.\nIn line with the WikiText benchmark in English, we collected over 70 million tokens from the set of verified good and featured articles on Wikipedia. The model reaches a zero-shot perplexity of 12.9 on the test set.", "### BibTeX entry and citation info\n\n\nAlong with the model hosted by HuggingFace transformers library, we maintain a git repository.\nIf you use GPT-fr for your scientific publications or your industrial applications, please cite the following paper:", "### References\n\n\n\n> \n> Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218\n> \n\n\n\n> \n> Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, Hassan Sajjad: Findings of the First Shared Task on Machine Translation Robustness. WMT (2) 2019: 91-102\n> \n\n\n\n> \n> Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro: Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. 
CoRR abs/1909.08053 (2019)\n> \n\n\n\n> \n> Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, Thomas Dandres: Quantifying the Carbon Emissions of Machine Learning. CoRR abs/1910.09700 (2019)\n>" ]
[ "TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #fr #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "#### How to use\n\n\nThe model might be used through the astonishing 'Transformers' librairie. We use the work from Shoeybi et al., (2019) and calibrate our model such that during pre-training or fine-tuning, the model can fit on a single NVIDIA V100 32GB GPU.", "#### Limitations and bias\n\n\nLarge language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation.\n\n\nTo limit exposition to too much explicit material, we carefully choose the sources beforehand. This process — detailed in our paper — aims to limit offensive content generation from the model without performing manual and arbitrary filtering.\n\n\nHowever, some societal biases, contained in the data, might be reflected by the model. For example on gender equality, we generated the following sentence sequence \"Ma femme/Mon mari vient d'obtenir un nouveau poste en tant \\_\\_\\_\\_\\_\\_\\_\". We used top-k random sampling strategy with k=50 and stopped at the first punctuation element.\nThe positions generated for the wife is '*que professeur de français.*' while the position for the husband is '*que chef de projet.*'. We do appreciate your feedback to better qualitatively and quantitatively assess such effects.\n\n\nTraining data\n-------------\n\n\nWe created a dedicated corpus to train our generative model. Indeed the model uses a fixed-length context size of 1,024 and require long documents to be trained. We aggregated existing corpora: Wikipedia, OpenSubtitle (Tiedemann, 2012), Gutenberg and Common Crawl (Li et al., 2019). Corpora are filtered and separated into sentences. Successive sentences are then concatenated within the limit of 1,024 tokens per document.\n\n\nTraining procedure\n------------------\n\n\nWe pre-trained the model on the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We perform the training within a total of 140 hours of computation on Tesla V-100 hardware (TDP of 300W). The training was distributed on 4 compute nodes of 8 GPUs. We used data parallelization in order to divide each micro-batch on the computing units. We estimated the total emissions at 580.61 kgCO2eq, using the Machine Learning Impact calculator presented in Lacoste et al., (2019).\n\n\nEval results\n------------\n\n\nWe packaged GPT-fr with a dedicated language model evaluation benchmark for French.\nIn line with the WikiText benchmark in English, we collected over 70 million tokens from the set of verified good and featured articles on Wikipedia. The model reaches a zero-shot perplexity of 12.9 on the test set.", "### BibTeX entry and citation info\n\n\nAlong with the model hosted by HuggingFace transformers library, we maintain a git repository.\nIf you use GPT-fr for your scientific publications or your industrial applications, please cite the following paper:", "### References\n\n\n\n> \n> Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218\n> \n\n\n\n> \n> Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, Hassan Sajjad: Findings of the First Shared Task on Machine Translation Robustness. 
WMT (2) 2019: 91-102\n> \n\n\n\n> \n> Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro: Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. CoRR abs/1909.08053 (2019)\n> \n\n\n\n> \n> Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, Thomas Dandres: Quantifying the Carbon Emissions of Machine Learning. CoRR abs/1910.09700 (2019)\n>" ]
text-generation
transformers
<img src="https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png" width="200"> ## Model description **GPT-fr** 🇫🇷 is a GPT model for French developped by [Quantmetry](https://www.quantmetry.com/) and the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We train the model on a very large and heterogeneous French corpus. We release the weights for the following configurations: | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `gpt-fr-cased-small` | 12 | 12 | 768 | 124 M | | `gpt-fr-cased-base` | 24 | 14 | 1,792 | 1,017 B | ## Intended uses & limitations The model can be leveraged for language generation tasks. Besides, many tasks may be formatted such that the output is directly generated in natural language. Such configuration may be used for tasks such as automatic summary or question answering. We do hope our model might be used for both academic and industrial applications. #### How to use The model might be used through the astonishing 🤗 `Transformers` librairie: ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel # Load pretrained model and tokenizer model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-small") tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small") # Generate a sample of text model.eval() input_sentence = "Longtemps je me suis couché de bonne heure." input_ids = tokenizer.encode(input_sentence, return_tensors='pt') beam_outputs = model.generate( input_ids, max_length=100, do_sample=True, top_k=50, top_p=0.95, num_return_sequences=1 ) print("Output:\n" + 100 * '-') print(tokenizer.decode(beam_outputs[0], skip_special_tokens=True)) ``` #### Limitations and bias Large language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation. To limit exposition to too much explicit material, we carefully choose the sources beforehand. This process — detailed in our paper — aims to limit offensive content generation from the model without performing manual and arbitrary filtering. However, some societal biases, contained in the data, might be reflected by the model. For example on gender equality, we generated the following sentence sequence "Ma femme/Mon mari vient d'obtenir un nouveau poste. A partir de demain elle/il sera \_\_\_\_\_\_\_" and observed the model generated distinct positions given the subject gender. We used top-k random sampling strategy with k=50 and stopped at the first punctuation element. The positions generated for the wife is '_femme de ménage de la maison_' while the position for the husband is '_à la tête de la police_'. We do appreciate your feedback to better qualitatively and quantitatively assess such effects. ## Training data We created a dedicated corpus to train our generative model. Indeed the model uses a fixed-length context size of 1,024 and require long documents to be trained. We aggregated existing corpora: [Wikipedia](https://dumps.wikimedia.org/frwiki/), [OpenSubtitle](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2016/mono/) ([Tiedemann, 2012](#tiedemann-2012)), [Gutenberg](http://www.gutenberg.org). Corpora are filtered and separated into sentences. Successive sentences are then concatenated within the limit of 1,024 tokens per document. ## Training procedure We pre-trained the model on a TPU v2-8 using the amazing [Google Colab](https://colab.research.google.com) inter-server. 
## Eval results We packaged **GPT-fr** with a dedicated language model evaluation benchmark. In line with the [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark in English, we collected over 70 million tokens from the set of verified [good](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Articles_de_qualit%C3%A9) and [featured](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Bons_articles) articles on French Wikipedia. The model reaches a zero-shot perplexity of **109.2** on the test set. ### BibTeX entry and citation info Along with the model hosted by HuggingFace transformers library, we maintain a [git repository](https://github.com/AntoineSimoulin/gpt-fr). If you use **GPT-fr** for your scientific publications or your industrial applications, please cite the following paper: ```bibtex @inproceedings{simoulin:hal-03265900, TITLE = {{Un mod{\`e}le Transformer G{\'e}n{\'e}ratif Pr{\'e}-entrain{\'e} pour le \_\_\_\_\_\_ fran{\c c}ais}}, AUTHOR = {Simoulin, Antoine and Crabb{\'e}, Benoit}, URL = {https://hal.archives-ouvertes.fr/hal-03265900}, BOOKTITLE = {{Traitement Automatique des Langues Naturelles}}, ADDRESS = {Lille, France}, EDITOR = {Denis, Pascal and Grabar, Natalia and Fraisse, Amel and Cardon, R{\'e}mi and Jacquemin, Bernard and Kergosien, Eric and Balvet, Antonio}, PUBLISHER = {{ATALA}}, PAGES = {246-255}, YEAR = {2021}, KEYWORDS = {fran{\c c}ais. ; GPT ; G{\'e}n{\'e}ratif ; Transformer ; Pr{\'e}-entra{\^i}n{\'e}}, PDF = {https://hal.archives-ouvertes.fr/hal-03265900/file/7.pdf}, HAL_ID = {hal-03265900}, HAL_VERSION = {v1}, } ``` ### References ><div name="tiedemann-2012">Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218</div>
{"language": ["fr"], "license": "apache-2.0", "tags": ["tf", "pytorch", "gpt2", "text-generation"], "thumbnail": "https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png", "model-index": [{"name": "asi/gpt-fr-cased-base", "results": [{"task": {"type": "text-generation", "name": "Wikitext-fr"}, "dataset": {"name": "Wikitext-fr", "type": "wikitext_fr"}, "metrics": [{"type": "perplexity", "value": 109.2, "name": "Perplexity"}]}, {"task": {"type": "text-classification", "name": "FLUE"}, "dataset": {"name": "CLS-Books", "type": "flue", "split": "CLS"}, "metrics": [{"type": "accuracy", "value": 88.3, "name": "Accuracy"}, {"type": "accuracy", "value": 86.9, "name": "Accuracy"}, {"type": "accuracy", "value": 89.3, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "FLUE"}, "dataset": {"name": "PAWS-X", "type": "flue", "split": "PAWS-X"}, "metrics": [{"type": "accuracy", "value": 83.3, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "FLUE"}, "dataset": {"name": "XNLI", "type": "flue", "split": "XNLI"}, "metrics": [{"type": "accuracy", "value": 75.6, "name": "Accuracy"}]}, {"task": {"type": "summarization", "name": "OrangeSum"}, "dataset": {"name": "OrangeSum-Abstract", "type": "orange_sum", "split": "abstract"}, "metrics": [{"type": "rouge", "value": 17.5, "name": "ROUGE-1"}, {"type": "rouge", "value": 3.1, "name": "ROUGE-2"}, {"type": "rouge", "value": 12.1, "name": "ROUGE-L"}]}, {"task": {"type": "summarization", "name": "OrangeSum"}, "dataset": {"name": "OrangeSum-Title", "type": "orange_sum", "split": "title"}, "metrics": [{"type": "rouge", "value": 13.9, "name": "ROUGE-1"}, {"type": "rouge", "value": 2.3, "name": "ROUGE-2"}, {"type": "rouge", "value": 9.7, "name": "ROUGE-L"}]}]}]}
asi/gpt-fr-cased-small
null
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "fr", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #tf #jax #gpt2 #text-generation #fr #license-apache-2.0 #model-index #endpoints_compatible #has_space #text-generation-inference #region-us
<img src="URL width="200"> Model description ----------------- GPT-fr 🇫🇷 is a GPT model for French developped by Quantmetry and the Laboratoire de Linguistique Formelle (LLF). We train the model on a very large and heterogeneous French corpus. We release the weights for the following configurations: Intended uses & limitations --------------------------- The model can be leveraged for language generation tasks. Besides, many tasks may be formatted such that the output is directly generated in natural language. Such configuration may be used for tasks such as automatic summary or question answering. We do hope our model might be used for both academic and industrial applications. #### How to use The model might be used through the astonishing 'Transformers' librairie: #### Limitations and bias Large language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation. To limit exposition to too much explicit material, we carefully choose the sources beforehand. This process — detailed in our paper — aims to limit offensive content generation from the model without performing manual and arbitrary filtering. However, some societal biases, contained in the data, might be reflected by the model. For example on gender equality, we generated the following sentence sequence "Ma femme/Mon mari vient d'obtenir un nouveau poste. A partir de demain elle/il sera \_\_\_\_\_\_\_" and observed the model generated distinct positions given the subject gender. We used top-k random sampling strategy with k=50 and stopped at the first punctuation element. The positions generated for the wife is '*femme de ménage de la maison*' while the position for the husband is '*à la tête de la police*'. We do appreciate your feedback to better qualitatively and quantitatively assess such effects. Training data ------------- We created a dedicated corpus to train our generative model. Indeed the model uses a fixed-length context size of 1,024 and require long documents to be trained. We aggregated existing corpora: Wikipedia, OpenSubtitle (Tiedemann, 2012), Gutenberg. Corpora are filtered and separated into sentences. Successive sentences are then concatenated within the limit of 1,024 tokens per document. Training procedure ------------------ We pre-trained the model on a TPU v2-8 using the amazing Google Colab inter-server. Eval results ------------ We packaged GPT-fr with a dedicated language model evaluation benchmark. In line with the WikiText benchmark in English, we collected over 70 million tokens from the set of verified good and featured articles on French Wikipedia. The model reaches a zero-shot perplexity of 109.2 on the test set. ### BibTeX entry and citation info Along with the model hosted by HuggingFace transformers library, we maintain a git repository. If you use GPT-fr for your scientific publications or your industrial applications, please cite the following paper: ### References > > Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218
[ "#### How to use\n\n\nThe model might be used through the astonishing 'Transformers' librairie:", "#### Limitations and bias\n\n\nLarge language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation.\n\n\nTo limit exposition to too much explicit material, we carefully choose the sources beforehand. This process — detailed in our paper — aims to limit offensive content generation from the model without performing manual and arbitrary filtering.\n\n\nHowever, some societal biases, contained in the data, might be reflected by the model. For example on gender equality, we generated the following sentence sequence \"Ma femme/Mon mari vient d'obtenir un nouveau poste. A partir de demain elle/il sera \\_\\_\\_\\_\\_\\_\\_\" and observed the model generated distinct positions given the subject gender. We used top-k random sampling strategy with k=50 and stopped at the first punctuation element.\nThe positions generated for the wife is '*femme de ménage de la maison*' while the position for the husband is '*à la tête de la police*'. We do appreciate your feedback to better qualitatively and quantitatively assess such effects.\n\n\nTraining data\n-------------\n\n\nWe created a dedicated corpus to train our generative model. Indeed the model uses a fixed-length context size of 1,024 and require long documents to be trained. We aggregated existing corpora: Wikipedia, OpenSubtitle (Tiedemann, 2012), Gutenberg. Corpora are filtered and separated into sentences. Successive sentences are then concatenated within the limit of 1,024 tokens per document.\n\n\nTraining procedure\n------------------\n\n\nWe pre-trained the model on a TPU v2-8 using the amazing Google Colab inter-server.\n\n\nEval results\n------------\n\n\nWe packaged GPT-fr with a dedicated language model evaluation benchmark.\nIn line with the WikiText benchmark in English, we collected over 70 million tokens from the set of verified good and featured articles on French Wikipedia. The model reaches a zero-shot perplexity of 109.2 on the test set.", "### BibTeX entry and citation info\n\n\nAlong with the model hosted by HuggingFace transformers library, we maintain a git repository.\nIf you use GPT-fr for your scientific publications or your industrial applications, please cite the following paper:", "### References\n\n\n\n> \n> Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218" ]
[ "TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #fr #license-apache-2.0 #model-index #endpoints_compatible #has_space #text-generation-inference #region-us \n", "#### How to use\n\n\nThe model might be used through the astonishing 'Transformers' librairie:", "#### Limitations and bias\n\n\nLarge language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation.\n\n\nTo limit exposition to too much explicit material, we carefully choose the sources beforehand. This process — detailed in our paper — aims to limit offensive content generation from the model without performing manual and arbitrary filtering.\n\n\nHowever, some societal biases, contained in the data, might be reflected by the model. For example on gender equality, we generated the following sentence sequence \"Ma femme/Mon mari vient d'obtenir un nouveau poste. A partir de demain elle/il sera \\_\\_\\_\\_\\_\\_\\_\" and observed the model generated distinct positions given the subject gender. We used top-k random sampling strategy with k=50 and stopped at the first punctuation element.\nThe positions generated for the wife is '*femme de ménage de la maison*' while the position for the husband is '*à la tête de la police*'. We do appreciate your feedback to better qualitatively and quantitatively assess such effects.\n\n\nTraining data\n-------------\n\n\nWe created a dedicated corpus to train our generative model. Indeed the model uses a fixed-length context size of 1,024 and require long documents to be trained. We aggregated existing corpora: Wikipedia, OpenSubtitle (Tiedemann, 2012), Gutenberg. Corpora are filtered and separated into sentences. Successive sentences are then concatenated within the limit of 1,024 tokens per document.\n\n\nTraining procedure\n------------------\n\n\nWe pre-trained the model on a TPU v2-8 using the amazing Google Colab inter-server.\n\n\nEval results\n------------\n\n\nWe packaged GPT-fr with a dedicated language model evaluation benchmark.\nIn line with the WikiText benchmark in English, we collected over 70 million tokens from the set of verified good and featured articles on French Wikipedia. The model reaches a zero-shot perplexity of 109.2 on the test set.", "### BibTeX entry and citation info\n\n\nAlong with the model hosted by HuggingFace transformers library, we maintain a git repository.\nIf you use GPT-fr for your scientific publications or your industrial applications, please cite the following paper:", "### References\n\n\n\n> \n> Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-timit-demo This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4847 - Wer: 0.3462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.487 | 4.0 | 500 | 1.3466 | 1.0153 | | 0.6134 | 8.0 | 1000 | 0.4807 | 0.4538 | | 0.2214 | 12.0 | 1500 | 0.4684 | 0.3984 | | 0.1233 | 16.0 | 2000 | 0.5070 | 0.3779 | | 0.0847 | 20.0 | 2500 | 0.4965 | 0.3705 | | 0.0611 | 24.0 | 3000 | 0.4881 | 0.3535 | | 0.0464 | 28.0 | 3500 | 0.4847 | 0.3462 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
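The card documents training but not inference; a minimal transcription sketch, assuming the repository ships its Wav2Vec2 processor files and that `example.wav` is a hypothetical 16 kHz mono recording:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("asini/wav2vec2-timit-demo")
model = Wav2Vec2ForCTC.from_pretrained("asini/wav2vec2-timit-demo")
model.eval()

# Load the audio; wav2vec2-base models expect 16 kHz mono input, so resample if needed.
waveform, sample_rate = torchaudio.load("example.wav")  # hypothetical path
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```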
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-timit-demo", "results": []}]}
asini/wav2vec2-timit-demo
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-timit-demo =================== This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.4847 * Wer: 0.3462 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
text-classification
transformers
# BERT-Large-Uncased for Sentiment Analysis This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) originally released in ["BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"](https://arxiv.org/abs/1810.04805) and trained on the [Stanford Sentiment Treebank v2 (SST2)](https://nlp.stanford.edu/sentiment/); part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post](). ## Usage To download and utilize this model for sentiment analysis please execute the following: ```python import torch.nn.functional as F from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("assemblyai/bert-large-uncased-sst2") model = AutoModelForSequenceClassification.from_pretrained("assemblyai/bert-large-uncased-sst2") tokenized_segments = tokenizer(["AssemblyAI is the best speech-to-text API for modern developers with performance being second to none!"], return_tensors="pt", padding=True, truncation=True) tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1) print("Positive probability: "+str(model_predictions[0][1].item()*100)+"%") print("Negative probability: "+str(model_predictions[0][0].item()*100)+"%") ``` For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)!
{}
assemblyai/bert-large-uncased-sst2
null
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1810.04805" ]
[]
TAGS #transformers #pytorch #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #region-us
# BERT-Large-Uncased for Sentiment Analysis This model is a fine-tuned version of bert-large-uncased originally released in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" and trained on the Stanford Sentiment Treebank v2 (SST2); part of the General Language Understanding Evaluation (GLUE) benchmark. This model was fine-tuned by the team at AssemblyAI and is released with the [corresponding blog post](). ## Usage To download and utilize this model for sentiment analysis please execute the following: For questions about how to use this model feel free to contact the team at AssemblyAI!
[ "# BERT-Large-Uncased for Sentiment Analysis\nThis model is a fine-tuned version of bert-large-uncased originally released in \"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding\" and trained on the Stanford Sentiment Treebank v2 (SST2); part of the General Language Understanding Evaluation (GLUE) benchmark. This model was fine-tuned by the team at AssemblyAI and is released with the [corresponding blog post]().", "## Usage\nTo download and utilize this model for sentiment analysis please execute the following:\n\n\nFor questions about how to use this model feel free to contact the team at AssemblyAI!" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #region-us \n", "# BERT-Large-Uncased for Sentiment Analysis\nThis model is a fine-tuned version of bert-large-uncased originally released in \"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding\" and trained on the Stanford Sentiment Treebank v2 (SST2); part of the General Language Understanding Evaluation (GLUE) benchmark. This model was fine-tuned by the team at AssemblyAI and is released with the [corresponding blog post]().", "## Usage\nTo download and utilize this model for sentiment analysis please execute the following:\n\n\nFor questions about how to use this model feel free to contact the team at AssemblyAI!" ]
text-classification
transformers
# DistilBERT-Base-Uncased for Duplicate Question Detection This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) dataset; part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post](). ## Usage To download and utilize this model for duplicate question detection please execute the following: ```python import torch.nn.functional as F from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("assemblyai/distilbert-base-uncased-qqp") model = AutoModelForSequenceClassification.from_pretrained("assemblyai/distilbert-base-uncased-qqp") tokenized_segments = tokenizer(["How many hours does it take to fly from California to New York?"], ["What is the flight time from New York to Seattle?"], return_tensors="pt", padding=True, truncation=True) tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1) print("Duplicate probability: "+str(model_predictions[0][1].item()*100)+"%") print("Non-duplicate probability: "+str(model_predictions[0][0].item()*100)+"%") ``` For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)!
{}
assemblyai/distilbert-base-uncased-qqp
null
[ "transformers", "pytorch", "distilbert", "text-classification", "arxiv:1910.01108", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1910.01108" ]
[]
TAGS #transformers #pytorch #distilbert #text-classification #arxiv-1910.01108 #autotrain_compatible #endpoints_compatible #region-us
# DistilBERT-Base-Uncased for Duplicate Question Detection This model is a fine-tuned version of distilbert-base-uncased originally released in "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter" and trained on the Quora Question Pairs dataset; part of the General Language Understanding Evaluation (GLUE) benchmark. This model was fine-tuned by the team at AssemblyAI and is released with the [corresponding blog post](). ## Usage To download and utilize this model for duplicate question detection please execute the following: For questions about how to use this model feel free to contact the team at AssemblyAI!
[ "# DistilBERT-Base-Uncased for Duplicate Question Detection\nThis model is a fine-tuned version of distilbert-base-uncased originally released in \"DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter\" and trained on the Quora Question Pairs dataset; part of the General Language Understanding Evaluation (GLUE) benchmark. This model was fine-tuned by the team at AssemblyAI and is released with the [corresponding blog post]().", "## Usage\nTo download and utilize this model for duplicate question detection please execute the following:\n\n\nFor questions about how to use this model feel free to contact the team at AssemblyAI!" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #arxiv-1910.01108 #autotrain_compatible #endpoints_compatible #region-us \n", "# DistilBERT-Base-Uncased for Duplicate Question Detection\nThis model is a fine-tuned version of distilbert-base-uncased originally released in \"DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter\" and trained on the Quora Question Pairs dataset; part of the General Language Understanding Evaluation (GLUE) benchmark. This model was fine-tuned by the team at AssemblyAI and is released with the [corresponding blog post]().", "## Usage\nTo download and utilize this model for duplicate question detection please execute the following:\n\n\nFor questions about how to use this model feel free to contact the team at AssemblyAI!" ]
text-classification
transformers
# DistilBERT-Base-Uncased for Sentiment Analysis This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [Stanford Sentiment Treebank v2 (SST2)](https://nlp.stanford.edu/sentiment/); part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post](). ## Usage To download and utilize this model for sentiment analysis please execute the following: ```python import torch.nn.functional as F from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("assemblyai/distilbert-base-uncased-sst2") model = AutoModelForSequenceClassification.from_pretrained("assemblyai/distilbert-base-uncased-sst2") tokenized_segments = tokenizer(["AssemblyAI is the best speech-to-text API for modern developers with performance being second to none!"], return_tensors="pt", padding=True, truncation=True) tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1) print("Positive probability: "+str(model_predictions[0][1].item()*100)+"%") print("Negative probability: "+str(model_predictions[0][0].item()*100)+"%") ``` For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)!
{}
assemblyai/distilbert-base-uncased-sst2
null
[ "transformers", "pytorch", "distilbert", "text-classification", "arxiv:1910.01108", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1910.01108" ]
[]
TAGS #transformers #pytorch #distilbert #text-classification #arxiv-1910.01108 #autotrain_compatible #endpoints_compatible #has_space #region-us
# DistilBERT-Base-Uncased for Sentiment Analysis This model is a fine-tuned version of distilbert-base-uncased originally released in "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter" and trained on the Stanford Sentiment Treebank v2 (SST2); part of the General Language Understanding Evaluation (GLUE) benchmark. This model was fine-tuned by the team at AssemblyAI and is released with the [corresponding blog post](). ## Usage To download and utilize this model for sentiment analysis please execute the following: For questions about how to use this model feel free to contact the team at AssemblyAI!
[ "# DistilBERT-Base-Uncased for Sentiment Analysis\nThis model is a fine-tuned version of distilbert-base-uncased originally released in \"DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter\" and trained on the Stanford Sentiment Treebank v2 (SST2); part of the General Language Understanding Evaluation (GLUE) benchmark. This model was fine-tuned by the team at AssemblyAI and is released with the [corresponding blog post]().", "## Usage\nTo download and utilize this model for sentiment analysis please execute the following:\n\n\nFor questions about how to use this model feel free to contact the team at AssemblyAI!" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #arxiv-1910.01108 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# DistilBERT-Base-Uncased for Sentiment Analysis\nThis model is a fine-tuned version of distilbert-base-uncased originally released in \"DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter\" and trained on the Stanford Sentiment Treebank v2 (SST2); part of the General Language Understanding Evaluation (GLUE) benchmark. This model was fine-tuned by the team at AssemblyAI and is released with the [corresponding blog post]().", "## Usage\nTo download and utilize this model for sentiment analysis please execute the following:\n\n\nFor questions about how to use this model feel free to contact the team at AssemblyAI!" ]
text-classification
transformers
# Description This model takes a tweet with the word "jew" in it, and determines if it's antisemitic. Training data: This model was trained on 4k tweets, where ~50% were labeled as antisemitic. I labeled them myself based on personal experience and knowledge about common antisemitic tropes. Note: The goal for this model is not to be used as a final say on what is or is not antisemitic, but rather as a first pass on what might be antisemitic and should be reviewed by human experts. Please keep in mind that I'm not an expert on antisemitism or hatespeech. Whether something is antisemitic or not depends on the context, as for any hate speech, and everyone has a different definition for what is hate speech. If you would like to collaborate on antisemitism detection, please feel free to contact me at [email protected] This model is not ready for production, it needs more evaluation and more training data. # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 21194454 - CO2 Emissions (in grams): 2.0686690092905224 - Dataset: https://huggingface.co/datasets/astarostap/autonlp-data-antisemitism-2 ## Validation Metrics - Loss: 0.5291365385055542 - Accuracy: 0.7572692793931732 - Precision: 0.7126948775055679 - Recall: 0.835509138381201 - AUC: 0.8185826549941126 - F1: 0.7692307692307693 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/astarostap/autonlp-antisemitism-2-21194454 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("astarostap/autonlp-antisemitism-2-21194454", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("astarostap/autonlp-antisemitism-2-21194454", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["astarostap/autonlp-data-antisemitism-2"], "widget": [{"text": "the jews have a lot of power"}], "co2_eq_emissions": 2.0686690092905224}
astarostap/autonlp-antisemitism-2-21194454
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:astarostap/autonlp-data-antisemitism-2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #autonlp #en #dataset-astarostap/autonlp-data-antisemitism-2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Description This model takes a tweet with the word "jew" in it, and determines if it's antisemitic. Training data: This model was trained on 4k tweets, where ~50% were labeled as antisemitic. I labeled them myself based on personal experience and knowledge about common antisemitic tropes. Note: The goal for this model is not to be used as a final say on what is or is not antisemitic, but rather as a first pass on what might be antisemitic and should be reviewed by human experts. Please keep in mind that I'm not an expert on antisemitism or hatespeech. Whether something is antisemitic or not depends on the context, as for any hate speech, and everyone has a different definition for what is hate speech. If you would like to collaborate on antisemitism detection, please feel free to contact me at starosta@URL This model is not ready for production, it needs more evaluation and more training data. # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 21194454 - CO2 Emissions (in grams): 2.0686690092905224 - Dataset: URL ## Validation Metrics - Loss: 0.5291365385055542 - Accuracy: 0.7572692793931732 - Precision: 0.7126948775055679 - Recall: 0.835509138381201 - AUC: 0.8185826549941126 - F1: 0.7692307692307693 ## Usage You can use cURL to access this model: Or Python API:
[ "# Description\n\nThis model takes a tweet with the word \"jew\" in it, and determines if it's antisemitic.\n\nTraining data:\n\nThis model was trained on 4k tweets, where ~50% were labeled as antisemitic.\n\nI labeled them myself based on personal experience and knowledge about common antisemitic tropes.\n\nNote:\n\nThe goal for this model is not to be used as a final say on what is or is not antisemitic, but rather as a first pass on what might be antisemitic and should be reviewed by human experts.\n\nPlease keep in mind that I'm not an expert on antisemitism or hatespeech.\n\nWhether something is antisemitic or not depends on the context, as for any hate speech, and everyone has a different definition for what is hate speech.\n\nIf you would like to collaborate on antisemitism detection, please feel free to contact me at starosta@URL\n\nThis model is not ready for production, it needs more evaluation and more training data.", "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 21194454\n- CO2 Emissions (in grams): 2.0686690092905224\n- Dataset: URL", "## Validation Metrics\n\n- Loss: 0.5291365385055542\n- Accuracy: 0.7572692793931732\n- Precision: 0.7126948775055679\n- Recall: 0.835509138381201\n- AUC: 0.8185826549941126\n- F1: 0.7692307692307693", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-astarostap/autonlp-data-antisemitism-2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Description\n\nThis model takes a tweet with the word \"jew\" in it, and determines if it's antisemitic.\n\nTraining data:\n\nThis model was trained on 4k tweets, where ~50% were labeled as antisemitic.\n\nI labeled them myself based on personal experience and knowledge about common antisemitic tropes.\n\nNote:\n\nThe goal for this model is not to be used as a final say on what is or is not antisemitic, but rather as a first pass on what might be antisemitic and should be reviewed by human experts.\n\nPlease keep in mind that I'm not an expert on antisemitism or hatespeech.\n\nWhether something is antisemitic or not depends on the context, as for any hate speech, and everyone has a different definition for what is hate speech.\n\nIf you would like to collaborate on antisemitism detection, please feel free to contact me at starosta@URL\n\nThis model is not ready for production, it needs more evaluation and more training data.", "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 21194454\n- CO2 Emissions (in grams): 2.0686690092905224\n- Dataset: URL", "## Validation Metrics\n\n- Loss: 0.5291365385055542\n- Accuracy: 0.7572692793931732\n- Precision: 0.7126948775055679\n- Recall: 0.835509138381201\n- AUC: 0.8185826549941126\n- F1: 0.7692307692307693", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
text-classification
transformers
This model takes a tweet with the word "jew" in it and determines whether it is antisemitic. *Training data:* This model was trained on 4k tweets, where ~50% were labeled as antisemitic. I labeled them myself based on personal experience and knowledge about common antisemitic tropes. *Note:* The goal for this model is not to be used as a final say on what is or is not antisemitic, but rather as a first pass on what might be antisemitic and should be reviewed by human experts. Please keep in mind that I'm not an expert on antisemitism or hate speech. Whether something is antisemitic or not depends on the context, as for any hate speech, and everyone has a different definition for what is hate speech. If you would like to collaborate on antisemitism detection, please feel free to contact me at [email protected]. This model is not ready for production; it needs more evaluation and more training data.
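A minimal inference sketch with the `transformers` pipeline; the label names returned depend on this checkpoint's config, so treat the output labels as placeholders until you inspect them:

```python
from transformers import pipeline

# Load the fine-tuned classifier directly from the Hub
classifier = pipeline(
    "text-classification",
    model="astarostap/distilbert-cased-antisemitic-tweets",
)

# Returns the predicted label and a confidence score for the tweet
print(classifier("Jews run the world."))
```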
{"license": "mit", "widget": [{"text": "Jews run the world."}]}
astarostap/distilbert-cased-antisemitic-tweets
null
[ "transformers", "pytorch", "distilbert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us
This model takes a tweet with the word "jew" in it, and determines if it's antisemitic. *Training data:* This model was trained on 4k tweets, where ~50% were labeled as antisemitic. I labeled them myself based on personal experience and knowledge about common antisemitic tropes. *Note:* The goal for this model is not to be used as a final say on what is or is not antisemitic, but rather as a first pass on what might be antisemitic and should be reviewed by human experts. Please keep in mind that I'm not an expert on antisemitism or hatespeech. Whether something is antisemitic or not depends on the context, as for any hate speech, and everyone has a different definition for what is hate speech. If you would like to collaborate on antisemitism detection, please feel free to contact me at starosta@URL This model is not ready for production, it needs more evaluation and more training data.
[]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
# friendly_JA-Model (T5 fine-tuned model) MT model trained using the friendly_JA Corpus attempting to make Japanese easier/more accessible to occidental people by using the Latin/English derived katakana lexicon instead of the standard Sino-Japanese lexicon # Examples | input | output| |---|---| |最適化を応用した機械翻訳モデルは高精度だ|オプティマイゼーションを応用したマシントランスレーションモデルは高いアキュラシーだ| |彼は架空の世界に住んでいる|彼はイマジナリー世界に住んでいる| |新型コロナウイルスに感染してしまった|コロナウイルスにかかってしまった| |深層学習は難しい|ディープラーニングはむずかしい| |新たな概念を紹介する|新しいコンセプトを紹介する| |津波の警報が流れた|ツナミのアラートが流れた| |南海トラフの災害は震源地による|南海トラフのディザスターはエピセンターによる| |息子は際どい内容の本を読んでしまった|子どもはセンシティブなコンテンツの本を読んでしまった| |彼女は非現金決済で払った|彼女はキャッシュレスで払った| |係員は会議の予定を調整している|担当の人はアジェンダを調整している| |友人とカラオケに行く予定があったが、彼女はどうしても美術館に行きたかった|友だちとカラオケに行くスケジュールがあったが、彼女はどうしてもミュージアムに行きたかった| |国際会議に参加しました|インターナショナルコンファレンスに参加しました| |部長は今日の会議に参加できかねました|部長は今日のミーティングに参加できませんでした。| |新型コロナウイルスの予防接種による心膜炎が多数報告されている|コロナウイルスのワクチンによるペリカーダイティスがレポートされている| |私はジョジョの奇妙な冒険が好き|私はジョジョのビザールアドベンチャーが好き| |新型コロナウイルスウイルス オミクロン株 1人死亡 8249人感染|コロナウイルス オミクロンバリアント 1人死んだ 8249人インフェクション| |2021年10月4日から岸田文雄は日本の総理大臣として勤めている|2021年10月4日から岸田文雄は日本のプライムミニスターとして働いている| # References t5 japanese pre-trained model: sonoisa t5-base-japanese (https://huggingface.co/sonoisa/t5-base-japanese) # License Shield: [![CC BY 4.0][cc-by-shield]][cc-by] This work is licensed under a [Creative Commons Attribution 4.0 International License][cc-by]. [![CC BY 4.0][cc-by-image]][cc-by] [cc-by]: http://creativecommons.org/licenses/by/4.0/ [cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png [cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
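A minimal inference sketch, assuming the checkpoint loads with the standard `T5Tokenizer`/`T5ForConditionalGeneration` classes (as the sonoisa base model does) and that sentences are passed without a task prefix, as in the examples above; the decoding settings below are illustrative, not the author's recommended values:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Requires the sentencepiece package for the Japanese tokenizer
tokenizer = T5Tokenizer.from_pretrained("astremo/friendly_JA")
model = T5ForConditionalGeneration.from_pretrained("astremo/friendly_JA")

# First example sentence from the table above
text = "最適化を応用した機械翻訳モデルは高精度だ"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```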
{"language": ["ja"], "license": "cc-by-4.0", "tags": ["japanese", "easy-japanese", "friendly-japanese", "sino-japanese", "katakana"], "datasets": ["astremo/friendly_JA_corpus"], "metrics": ["bleu"]}
astremo/friendly_JA
null
[ "transformers", "pytorch", "t5", "text2text-generation", "japanese", "easy-japanese", "friendly-japanese", "sino-japanese", "katakana", "ja", "dataset:astremo/friendly_JA_corpus", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #t5 #text2text-generation #japanese #easy-japanese #friendly-japanese #sino-japanese #katakana #ja #dataset-astremo/friendly_JA_corpus #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
friendly\_JA-Model (T5 fine-tuned model) ======================================== MT model trained using the friendly\_JA Corpus attempting to make Japanese easier/more accessible to occidental people by using the Latin/English derived katakana lexicon instead of the standard Sino-Japanese lexicon Examples ======== References ========== t5 japanese pre-trained model: sonoisa t5-base-japanese (URL License ======= Shield: [![CC BY 4.0](URL)](URL) This work is licensed under a [Creative Commons Attribution 4.0 International License](URL). [![CC BY 4.0](https://i.URL)](URL)
[]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #japanese #easy-japanese #friendly-japanese #sino-japanese #katakana #ja #dataset-astremo/friendly_JA_corpus #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
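A short interactive chat sketch following the standard DialoGPT generation recipe; the decoding parameters are illustrative assumptions, not values recommended by the author:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("astrobreazy/DialoGPT-small-harrypotter")
model = AutoModelForCausalLM.from_pretrained("astrobreazy/DialoGPT-small-harrypotter")

chat_history_ids = None
for step in range(3):
    # Encode the user turn and append the end-of-sequence token
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    # Feed the whole conversation so far back into the model
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```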
{"tags": ["conversational"]}
astrobreazy/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
null
git clone https://github.com/saic-mdal/lama.git
{}
asyou20/1234
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
git clone URL
[]
[ "TAGS\n#region-us \n" ]
null
transformers
# LayoutLM ## Model description LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets. For more details, please refer to our paper: [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers) ## Training data We pre-train LayoutLM on the IIT-CDIP Test Collection 1.0\* dataset with two settings. * LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters **(This Model)** * LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters ## Citation If you find LayoutLM useful in your research, please cite the following paper: ``` latex @misc{xu2019layoutlm, title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding}, author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou}, year={2019}, eprint={1912.13318}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
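A sketch of loading this TF checkpoint for feature extraction; it assumes a recent `transformers` release that ships `TFLayoutLMModel`, and it borrows the `microsoft/layoutlm-base-uncased` tokenizer in case this repository does not include tokenizer files. Word bounding boxes must be normalized to a 0-1000 scale:

```python
import tensorflow as tf
from transformers import LayoutLMTokenizer, TFLayoutLMModel

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMModel.from_pretrained("atahmasb/tf-layoutlm-base-uncased")

words = ["Hello", "world"]
normalized_word_boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]

# Every sub-word token reuses the bounding box of the word it came from
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
# Add dummy boxes for the [CLS] and [SEP] special tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

encoding = tokenizer(" ".join(words), return_tensors="tf")
outputs = model(
    input_ids=encoding["input_ids"],
    bbox=tf.convert_to_tensor([token_boxes]),
    attention_mask=encoding["attention_mask"],
    token_type_ids=encoding["token_type_ids"],
)
last_hidden_state = outputs.last_hidden_state  # (batch, seq_len, 768)
```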
{}
atahmasb/tf-layoutlm-base-uncased
null
[ "transformers", "tf", "layoutlm", "arxiv:1912.13318", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1912.13318" ]
[]
TAGS #transformers #tf #layoutlm #arxiv-1912.13318 #endpoints_compatible #region-us
# LayoutLM ## Model description LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM archives the SOTA results on multiple datasets. For more details, please refer to our paper: LayoutLM: Pre-training of Text and Layout for Document Image Understanding Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, KDD 2020 ## Training data We pre-train LayoutLM on IIT-CDIP Test Collection 1.0\* dataset with two settings. * LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters (This Model) * LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters If you find LayoutLM useful in your research, please cite the following paper:
[ "# LayoutLM", "## Model description\n\nLayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM archives the SOTA results on multiple datasets. For more details, please refer to our paper: \n\nLayoutLM: Pre-training of Text and Layout for Document Image Understanding\nYiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, KDD 2020", "## Training data\n\nWe pre-train LayoutLM on IIT-CDIP Test Collection 1.0\\* dataset with two settings. \n\n* LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters (This Model)\n* LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters\n\nIf you find LayoutLM useful in your research, please cite the following paper:" ]
[ "TAGS\n#transformers #tf #layoutlm #arxiv-1912.13318 #endpoints_compatible #region-us \n", "# LayoutLM", "## Model description\n\nLayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM archives the SOTA results on multiple datasets. For more details, please refer to our paper: \n\nLayoutLM: Pre-training of Text and Layout for Document Image Understanding\nYiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, KDD 2020", "## Training data\n\nWe pre-train LayoutLM on IIT-CDIP Test Collection 1.0\\* dataset with two settings. \n\n* LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters (This Model)\n* LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters\n\nIf you find LayoutLM useful in your research, please cite the following paper:" ]
null
transformers
# LayoutLM ## Model description LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets. For more details, please refer to our paper: [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers) ## Training data We pre-train LayoutLM on the IIT-CDIP Test Collection 1.0\* dataset with two settings. * LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters * LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters **(This Model)** ## Citation If you find LayoutLM useful in your research, please cite the following paper: ``` latex @misc{xu2019layoutlm, title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding}, author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou}, year={2019}, eprint={1912.13318}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{}
atahmasb/tf-layoutlm-large-uncased
null
[ "transformers", "tf", "layoutlm", "arxiv:1912.13318", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1912.13318" ]
[]
TAGS #transformers #tf #layoutlm #arxiv-1912.13318 #endpoints_compatible #region-us
# LayoutLM ## Model description LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM archives the SOTA results on multiple datasets. For more details, please refer to our paper: LayoutLM: Pre-training of Text and Layout for Document Image Understanding Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, KDD 2020 ## Training data We pre-train LayoutLM on IIT-CDIP Test Collection 1.0\* dataset with two settings. * LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters * LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters (This Model) If you find LayoutLM useful in your research, please cite the following paper:
[ "# LayoutLM", "## Model description\n\nLayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM archives the SOTA results on multiple datasets. For more details, please refer to our paper: \n\nLayoutLM: Pre-training of Text and Layout for Document Image Understanding\nYiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, KDD 2020", "## Training data\n\nWe pre-train LayoutLM on IIT-CDIP Test Collection 1.0\\* dataset with two settings. \n\n* LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters \n* LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters (This Model)\n\nIf you find LayoutLM useful in your research, please cite the following paper:" ]
[ "TAGS\n#transformers #tf #layoutlm #arxiv-1912.13318 #endpoints_compatible #region-us \n", "# LayoutLM", "## Model description\n\nLayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM archives the SOTA results on multiple datasets. For more details, please refer to our paper: \n\nLayoutLM: Pre-training of Text and Layout for Document Image Understanding\nYiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, KDD 2020", "## Training data\n\nWe pre-train LayoutLM on IIT-CDIP Test Collection 1.0\\* dataset with two settings. \n\n* LayoutLM-Base, Uncased (11M documents, 2 epochs): 12-layer, 768-hidden, 12-heads, 113M parameters \n* LayoutLM-Large, Uncased (11M documents, 2 epochs): 24-layer, 1024-hidden, 16-heads, 343M parameters (This Model)\n\nIf you find LayoutLM useful in your research, please cite the following paper:" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8508 - Matthews Correlation: 0.5452 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5221 | 1.0 | 535 | 0.5370 | 0.4246 | | 0.3462 | 2.0 | 1070 | 0.5157 | 0.5183 | | 0.2332 | 3.0 | 1605 | 0.6324 | 0.5166 | | 0.1661 | 4.0 | 2140 | 0.7616 | 0.5370 | | 0.1263 | 5.0 | 2675 | 0.8508 | 0.5452 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.0 - Tokenizers 0.10.3
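The hyperparameters above map onto the `Trainer` API roughly as in the sketch below; this is a reconstruction for illustration, since the exact training script is not part of this card:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

raw = load_dataset("glue", "cola")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoded = raw.map(lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True)

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # the default collator then pads dynamically per batch
)
trainer.train()
```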
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5451837431775948, "name": "Matthews Correlation"}]}]}]}
athar/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.8508 * Matthews Correlation: 0.5452 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.13.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
atkh6673/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text-generation
transformers
# Trump DialoGPT Model
{"tags": ["conversational"]}
atkh6673/DialoGPT-small-trump
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Trump DialoGPT Model
[ "# Trump DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Trump DialoGPT Model" ]
text-generation
transformers
# Dumbledore DialoGPT Model
{"tags": ["conversational"]}
atomsspawn/DialoGPT-small-dumbledore
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Dumbledore DialoGPT Model
[ "# Dumbledore DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Dumbledore DialoGPT Model" ]
null
transformers
# AraELECTRA <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraELECTRA.png" width="100" align="left"/> **ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). AraELECTRA achieves state-of-the-art results on Arabic QA dataset. For a detailed description, please refer to the AraELECTRA paper [AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding](https://arxiv.org/abs/2012.15516). ## How to use the discriminator in `transformers` ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("aubmindlab/araelectra-base-discriminator") tokenizer = ElectraTokenizerFast.from_pretrained("aubmindlab/araelectra-base-discriminator") sentence = "" fake_sentence = "" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions.tolist()] ``` # Model Model | HuggingFace Model Name | Size (MB/Params)| ---|:---:|:---: AraELECTRA-base-generator | [araelectra-base-generator](https://huggingface.co/aubmindlab/araelectra-base-generator) | 227MB/60M | AraELECTRA-base-discriminator | [araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) | 516MB/135M | # Compute Model | Hardware | num of examples (seq len = 512) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraELECTRA-base | TPUv3-8 | - | 256 | 2M | 24 # Dataset The pretraining data used for the new **AraELECTRA** model is also used for **AraGPT2 and AraBERTv2**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. 
**Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data `pip install arabert`** ```python from arabert.preprocess import ArabertPreprocessor model_name="araelectra-base" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) >>> output: ولن نبالغ إذا قلنا : إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري ``` # TensorFlow 1.x models **You can find the PyTorch, TF2 and TF1 models in HuggingFace's Transformer Library under the ```aubmindlab``` username** - `wget https://huggingface.co/aubmindlab/MODEL_NAME/resolve/main/tf1_model.tar.gz` where `MODEL_NAME` is any model under the `aubmindlab` name # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-araelectra, title = "{A}ra{ELECTRA}: Pre-Training Text Discriminators for {A}rabic Language Understanding", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.20", pages = "191--195", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"]}
aubmindlab/araelectra-base-discriminator
null
[ "transformers", "pytorch", "tf", "tensorboard", "electra", "pretraining", "ar", "arxiv:1406.2661", "arxiv:2012.15516", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1406.2661", "2012.15516" ]
[ "ar" ]
TAGS #transformers #pytorch #tf #tensorboard #electra #pretraining #ar #arxiv-1406.2661 #arxiv-2012.15516 #endpoints_compatible #has_space #region-us
AraELECTRA ========== <img src="URL width="100" align="left"/> ELECTRA is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. AraELECTRA achieves state-of-the-art results on Arabic QA dataset. For a detailed description, please refer to the AraELECTRA paper AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding. How to use the discriminator in 'transformers' ---------------------------------------------- Model ===== Compute ======= Dataset ======= The pretraining data used for the new AraELECTRA model is also used for AraGPT2 and AraBERTv2. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Preprocessing ============= It is recommended to apply our preprocessing function before training/testing on any dataset. Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data 'pip install arabert' TensorFlow 1.x models ===================== You can find the PyTorch, TF2 and TF1 models in HuggingFace's Transformer Library under the username * 'wget URL where 'MODEL\_NAME' is any model under the 'aubmindlab' name If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tf #tensorboard #electra #pretraining #ar #arxiv-1406.2661 #arxiv-2012.15516 #endpoints_compatible #has_space #region-us \n" ]
fill-mask
transformers
# AraELECTRA <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraELECTRA.png" width="100" align="left"/> **ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). AraELECTRA achieves state-of-the-art results on Arabic QA dataset. For a detailed description, please refer to the AraELECTRA paper [AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding](https://arxiv.org/abs/2012.15516). ## How to use the generator in `transformers` ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="aubmindlab/araelectra-base-generator", tokenizer="aubmindlab/araelectra-base-generator" ) print( fill_mask(" عاصمة لبنان هي [MASK] .) ) ``` # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data `pip install arabert`** ```python from arabert.preprocess import ArabertPreprocessor model_name="aubmindlab/araelectra-base" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) >>> output: ولن نبالغ إذا قلنا : إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري ``` # Model Model | HuggingFace Model Name | Size (MB/Params)| ---|:---:|:---: AraELECTRA-base-generator | [araelectra-base-generator](https://huggingface.co/aubmindlab/araelectra-base-generator) | 227MB/60M | AraELECTRA-base-discriminator | [araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) | 516MB/135M | # Compute Model | Hardware | num of examples (seq len = 512) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraELECTRA-base | TPUv3-8 | - | 256 | 2M | 24 # Dataset The pretraining data used for the new AraELECTRA model is also used for **AraGPT2 and AraELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. 
Huge thank you for Assafir for giving us the data # TensorFlow 1.x models **You can find the PyTorch, TF2 and TF1 models in HuggingFace's Transformer Library under the ```aubmindlab``` username** - `wget https://huggingface.co/aubmindlab/MODEL_NAME/resolve/main/tf1_model.tar.gz` where `MODEL_NAME` is any model under the `aubmindlab` name # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-araelectra, title = "{A}ra{ELECTRA}: Pre-Training Text Discriminators for {A}rabic Language Understanding", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.20", pages = "191--195", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/araelectra-base-generator
null
[ "transformers", "pytorch", "tf", "tensorboard", "safetensors", "electra", "fill-mask", "ar", "arxiv:1406.2661", "arxiv:2012.15516", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1406.2661", "2012.15516" ]
[ "ar" ]
TAGS #transformers #pytorch #tf #tensorboard #safetensors #electra #fill-mask #ar #arxiv-1406.2661 #arxiv-2012.15516 #autotrain_compatible #endpoints_compatible #has_space #region-us
AraELECTRA ========== <img src="URL width="100" align="left"/> ELECTRA is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. AraELECTRA achieves state-of-the-art results on Arabic QA dataset. For a detailed description, please refer to the AraELECTRA paper AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding. How to use the generator in 'transformers' ------------------------------------------ Preprocessing ============= It is recommended to apply our preprocessing function before training/testing on any dataset. Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data 'pip install arabert' Model ===== Compute ======= Dataset ======= The pretraining data used for the new AraELECTRA model is also used for AraGPT2 and AraELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data TensorFlow 1.x models ===================== You can find the PyTorch, TF2 and TF1 models in HuggingFace's Transformer Library under the username * 'wget URL where 'MODEL\_NAME' is any model under the 'aubmindlab' name If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tf #tensorboard #safetensors #electra #fill-mask #ar #arxiv-1406.2661 #arxiv-2012.15516 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-generation
transformers
# Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. # Usage ## Testing the model using `transformers`: ```python from transformers import GPT2TokenizerFast, pipeline #for base and medium from transformers import GPT2LMHeadModel #for large and mega # pip install arabert from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel from arabert.preprocess import ArabertPreprocessor MODEL_NAME='aubmindlab/aragpt2-base' arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME) text="" text_clean = arabert_prep.preprocess(text) model = GPT2LMHeadModel.from_pretrained(MODEL_NAME) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME) generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer) #feel free to try different decoding settings generation_pipeline(text, pad_token_id=tokenizer.eos_token_id, num_beams=10, max_length=200, top_p=0.9, repetition_penalty = 3.0, no_repeat_ngram_size = 3)[0]['generated_text'] ``` ## Finetunning using `transformers`: Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed) ## Finetuning using our code with TF 1.15.4: Create the Training TFRecords: ```bash python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files> ``` Finetuning: ```bash python3 run_pretraining.py \\r\n --input_file="gs://<GS_BUCKET>/pretraining_data/*" \\r\n --output_dir="gs://<GS_BUCKET>/pretraining_model/" \\r\n --config_file="config/small_hparams.json" \\r\n --batch_size=128 \\r\n --eval_batch_size=8 \\r\n --num_train_steps= \\r\n --num_warmup_steps= \\r\n --learning_rate= \\r\n --save_checkpoints_steps= \\r\n --max_seq_length=1024 \\r\n --max_eval_steps= \\r\n --optimizer="lamb" \\r\n --iterations_per_loop=5000 \\r\n --keep_checkpoint_max=10 \\r\n --use_tpu=True \\r\n --tpu_name=<TPU NAME> \\r\n --do_train=True \\r\n --do_eval=False ``` # Model Sizes Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params | ---|:---:|:---:|:---:|:---:|:---:|:---: AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M | AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M | AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 
2.98GB/792M | AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Compute Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5 AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5 AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3 AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9 # Dataset The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus after we thoroughly filter it, to the dataset used in AraBERTv1 but without the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": "\u064a\u062d\u0643\u0649 \u0623\u0646 \u0645\u0632\u0627\u0631\u0639\u0627 \u0645\u062e\u0627\u062f\u0639\u0627 \u0642\u0627\u0645 \u0628\u0628\u064a\u0639 \u0628\u0626\u0631 \u0627\u0644\u0645\u0627\u0621 \u0627\u0644\u0645\u0648\u062c\u0648\u062f \u0641\u064a \u0623\u0631\u0636\u0647 \u0644\u062c\u0627\u0631\u0647 \u0645\u0642\u0627\u0628\u0644 \u0645\u0628\u0644\u063a \u0643\u0628\u064a\u0631 \u0645\u0646 \u0627\u0644\u0645\u0627\u0644"}, {"text": "\u0627\u0644\u0642\u062f\u0633 \u0645\u062f\u064a\u0646\u0629 \u062a\u0627\u0631\u064a\u062e\u064a\u0629\u060c \u0628\u0646\u0627\u0647\u0627 \u0627\u0644\u0643\u0646\u0639\u0627\u0646\u064a\u0648\u0646 \u0641\u064a"}, {"text": "\u0643\u0627\u0646 \u064a\u0627 \u0645\u0627 \u0643\u0627\u0646 \u0641\u064a \u0642\u062f\u064a\u0645 \u0627\u0644\u0632\u0645\u0627\u0646"}]}
aubmindlab/aragpt2-base
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "gpt2", "text-generation", "ar", "arxiv:2012.15520", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.15520" ]
[ "ar" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #safetensors #gpt2 #text-generation #ar #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
Arabic GPT2 =========== <img src="URL width="100" align="left"/> You can find more information in our paper AraGPT2 The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the 'gpt2' folder and can trains models from the minimaxir/gpt-2-simple repository. These models were trained using the 'lamb' optimizer and follow the same architecture as 'gpt2' and are fully compatible with the 'transformers' library. GPT2-large and GPT2-mega were trained using the imcaspar/gpt2-ml library, and follow the 'grover' architecture. You can use the pytorch classes found in 'grover/modeling\_gpt2.py' as a direct replacement for classes in the 'transformers' library (it should support version 'v4.x' from 'transformers'). Both models are trained using the 'adafactor' optimizer, since the 'adam' and 'lamb' optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. Usage ===== Testing the model using 'transformers': --------------------------------------- Finetunning using 'transformers': --------------------------------- Follow the guide linked here Finetuning using our code with TF 1.15.4: ----------------------------------------- Create the Training TFRecords: Finetuning: Model Sizes =========== All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Compute ------- Dataset ======= The pretraining data used for the new AraGPT2 model is also used for AraBERTv2 and AraELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus after we thoroughly filter it, to the dataset used in AraBERTv1 but without the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Disclaimer ========== The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #safetensors #gpt2 #text-generation #ar #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text-generation
transformers
# Arabic GPT2

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/>

You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520)

The code in this repository was used to train all GPT2 variants. The code supports training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API.

GPT2-base and GPT2-medium use the code from the `gpt2` folder and can train models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer, follow the same architecture as `gpt2`, and are fully compatible with the `transformers` library.

GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` of `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizers use too much memory, causing the model to not even fit 1 batch on a TPU core.

AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2.

# Usage

## Testing the model using `transformers`:

```python
from transformers import GPT2TokenizerFast, pipeline
#for base and medium
from transformers import GPT2LMHeadModel
#for large and mega
# pip install arabert
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel
from arabert.preprocess import ArabertPreprocessor

MODEL_NAME='aubmindlab/aragpt2-large'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)

text=""
text_clean = arabert_prep.preprocess(text)

model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer)

#feel free to try different decoding settings
generation_pipeline(text,
                    pad_token_id=tokenizer.eos_token_id,
                    num_beams=10,
                    max_length=200,
                    top_p=0.9,
                    repetition_penalty=3.0,
                    no_repeat_ngram_size=3)[0]['generated_text']
```

## Finetuning using `transformers`:

Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed) (a minimal, illustrative Trainer-based sketch is also included at the end of this card).

## Finetuning using our code with TF 1.15.4:

Create the Training TFRecords:
```bash
python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```

Finetuning:
```bash
python3 run_pretraining.py \
 --input_file="gs://<GS_BUCKET>/pretraining_data/*" \
 --output_dir="gs://<GS_BUCKET>/pretraining_model/" \
 --config_file="config/small_hparams.json" \
 --batch_size=128 \
 --eval_batch_size=8 \
 --num_train_steps= \
 --num_warmup_steps= \
 --learning_rate= \
 --save_checkpoints_steps= \
 --max_seq_length=1024 \
 --max_eval_steps= \
 --optimizer="lamb" \
 --iterations_per_loop=5000 \
 --keep_checkpoint_max=10 \
 --use_tpu=True \
 --tpu_name=<TPU NAME> \
 --do_train=True \
 --do_eval=False
```

# Model Sizes

Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB/135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |

All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.

## Compute

For Dataset Source see the [Dataset Section](#Dataset)

Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9

# Dataset

The pretraining data used for the new AraBERT model is also used for **GPT2 and ELECTRA**.

The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)

For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data

# Disclaimer

The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of text, and does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it.

# If you used this model please cite us as :

```
@inproceedings{antoun-etal-2021-aragpt2,
    title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation",
    author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Virtual)",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.wanlp-1.21",
    pages = "196--207",
}
```

# Acknowledgments

Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT.

# Contacts

**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
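For readers who want a concrete starting point for the `transformers` fine-tuning route mentioned above (the card only links to an external guide), the following is a minimal, hedged sketch using the Hugging Face `Trainer`. The file name `train.txt`, the output directory, and all hyperparameters are placeholders for illustration only; they are not the settings used to train the released checkpoints.

```python
# Minimal fine-tuning sketch (illustrative only). Assumes `train.txt` exists,
# with one document per line, already cleaned with ArabertPreprocessor as shown above.
from transformers import (GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
# For aragpt2-large / aragpt2-mega use the grover-based class from the arabert package;
# for base/medium you can import GPT2LMHeadModel from transformers instead.
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel

MODEL_NAME = "aubmindlab/aragpt2-large"
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)

# Causal-LM data: no masked-LM objective, labels are the inputs shifted by one.
train_dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=512)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="aragpt2-large-finetuned",  # placeholder output directory
    num_train_epochs=1,                    # placeholder hyperparameters
    per_device_train_batch_size=2,
    save_steps=500,
    logging_steps=100,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
)
trainer.train()
trainer.save_model("aragpt2-large-finetuned")
```

After training, the saved directory can be loaded back with `GPT2LMHeadModel.from_pretrained("aragpt2-large-finetuned")` and used with the generation pipeline shown earlier in the card.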
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "inference": false, "widget": [{"text": "\u064a\u062d\u0643\u0649 \u0623\u0646 \u0645\u0632\u0627\u0631\u0639\u0627 \u0645\u062e\u0627\u062f\u0639\u0627 \u0642\u0627\u0645 \u0628\u0628\u064a\u0639 \u0628\u0626\u0631 \u0627\u0644\u0645\u0627\u0621 \u0627\u0644\u0645\u0648\u062c\u0648\u062f \u0641\u064a \u0623\u0631\u0636\u0647 \u0644\u062c\u0627\u0631\u0647 \u0645\u0642\u0627\u0628\u0644 \u0645\u0628\u0644\u063a \u0643\u0628\u064a\u0631 \u0645\u0646 \u0627\u0644\u0645\u0627\u0644"}, {"text": "\u0627\u0644\u0642\u062f\u0633 \u0645\u062f\u064a\u0646\u0629 \u062a\u0627\u0631\u064a\u062e\u064a\u0629\u060c \u0628\u0646\u0627\u0647\u0627 \u0627\u0644\u0643\u0646\u0639\u0627\u0646\u064a\u0648\u0646 \u0641\u064a"}, {"text": "\u0643\u0627\u0646 \u064a\u0627 \u0645\u0627 \u0643\u0627\u0646 \u0641\u064a \u0642\u062f\u064a\u0645 \u0627\u0644\u0632\u0645\u0627\u0646"}]}
aubmindlab/aragpt2-large
null
[ "transformers", "pytorch", "jax", "tensorboard", "safetensors", "gpt2", "text-generation", "ar", "arxiv:2012.15520", "autotrain_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.15520" ]
[ "ar" ]
TAGS #transformers #pytorch #jax #tensorboard #safetensors #gpt2 #text-generation #ar #arxiv-2012.15520 #autotrain_compatible #has_space #text-generation-inference #region-us
Arabic GPT2 =========== <img src="URL width="100" align="left"/> You can find more information in our paper AraGPT2 The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the 'gpt2' folder and can trains models from the minimaxir/gpt-2-simple repository. These models were trained using the 'lamb' optimizer and follow the same architecture as 'gpt2' and are fully compatible with the 'transformers' library. GPT2-large and GPT2-mega were trained using the imcaspar/gpt2-ml library, and follow the 'grover' architecture. You can use the pytorch classes found in 'grover/modeling\_gpt2.py' as a direct replacement for classes in the 'transformers' library (it should support version 'v4.x' from 'transformers'). Both models are trained using the 'adafactor' optimizer, since the 'adam' and 'lamb' optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. Usage ===== Testing the model using 'transformers': --------------------------------------- Finetunning using 'transformers': --------------------------------- Follow the guide linked here Finetuning using our code with TF 1.15.4: ----------------------------------------- Create the Training TFRecords: Finetuning: Model Sizes =========== All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Compute ------- For Dataset Source see the Dataset Section Dataset ======= The pretraining data used for the new AraBERT model is also used for GPT2 and ELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Disclaimer ========== The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #jax #tensorboard #safetensors #gpt2 #text-generation #ar #arxiv-2012.15520 #autotrain_compatible #has_space #text-generation-inference #region-us \n" ]
text-generation
transformers
# Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. # Usage ## Testing the model using `transformers`: ```python from transformers import GPT2TokenizerFast, pipeline #for base and medium from transformers import GPT2LMHeadModel #for large and mega # pip install arabert from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel from arabert.preprocess import ArabertPreprocessor MODEL_NAME='aubmindlab/aragpt2-medium' arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME) text="" text_clean = arabert_prep.preprocess(text) model = GPT2LMHeadModel.from_pretrained(MODEL_NAME) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME) generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer) #feel free to try different decoding settings generation_pipeline(text, pad_token_id=tokenizer.eos_token_id, num_beams=10, max_length=200, top_p=0.9, repetition_penalty = 3.0, no_repeat_ngram_size = 3)[0]['generated_text'] ``` ## Finetunning using `transformers`: Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed) ## Finetuning using our code with TF 1.15.4: Create the Training TFRecords: ```bash python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files> ``` Finetuning: ```bash python3 run_pretraining.py \\\n --input_file="gs://<GS_BUCKET>/pretraining_data/*" \\\n --output_dir="gs://<GS_BUCKET>/pretraining_model/" \\\n --config_file="config/small_hparams.json" \\\n --batch_size=128 \\\n --eval_batch_size=8 \\\n --num_train_steps= \\\n --num_warmup_steps= \\\n --learning_rate= \\\n --save_checkpoints_steps= \\\n --max_seq_length=1024 \\\n --max_eval_steps= \\\n --optimizer="lamb" \\\n --iterations_per_loop=5000 \\\n --keep_checkpoint_max=10 \\\n --use_tpu=True \\\n --tpu_name=<TPU NAME> \\\n --do_train=True \\\n --do_eval=False ``` # Model Sizes Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params | ---|:---:|:---:|:---:|:---:|:---:|:---: AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M | AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M | AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M | 
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Compute Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5 AraGPT2-medium | TPUv3-8 | 9.7M | 80 | 1M | 15 AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3 AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9 # Dataset The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": "\u064a\u062d\u0643\u0649 \u0623\u0646 \u0645\u0632\u0627\u0631\u0639\u0627 \u0645\u062e\u0627\u062f\u0639\u0627 \u0642\u0627\u0645 \u0628\u0628\u064a\u0639 \u0628\u0626\u0631 \u0627\u0644\u0645\u0627\u0621 \u0627\u0644\u0645\u0648\u062c\u0648\u062f \u0641\u064a \u0623\u0631\u0636\u0647 \u0644\u062c\u0627\u0631\u0647 \u0645\u0642\u0627\u0628\u0644 \u0645\u0628\u0644\u063a \u0643\u0628\u064a\u0631 \u0645\u0646 \u0627\u0644\u0645\u0627\u0644"}, {"text": "\u0627\u0644\u0642\u062f\u0633 \u0645\u062f\u064a\u0646\u0629 \u062a\u0627\u0631\u064a\u062e\u064a\u0629\u060c \u0628\u0646\u0627\u0647\u0627 \u0627\u0644\u0643\u0646\u0639\u0627\u0646\u064a\u0648\u0646 \u0641\u064a"}, {"text": "\u0643\u0627\u0646 \u064a\u0627 \u0645\u0627 \u0643\u0627\u0646 \u0641\u064a \u0642\u062f\u064a\u0645 \u0627\u0644\u0632\u0645\u0627\u0646"}]}
aubmindlab/aragpt2-medium
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "gpt2", "text-generation", "ar", "arxiv:2012.15520", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.15520" ]
[ "ar" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #safetensors #gpt2 #text-generation #ar #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
Arabic GPT2 =========== <img src="URL width="100" align="left"/> You can find more information in our paper AraGPT2 The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the 'gpt2' folder and can trains models from the minimaxir/gpt-2-simple repository. These models were trained using the 'lamb' optimizer and follow the same architecture as 'gpt2' and are fully compatible with the 'transformers' library. GPT2-large and GPT2-mega were trained using the imcaspar/gpt2-ml library, and follow the 'grover' architecture. You can use the pytorch classes found in 'grover/modeling\_gpt2.py' as a direct replacement for classes in the 'transformers' library (it should support version 'v4.x' from 'transformers'). Both models are trained using the 'adafactor' optimizer, since the 'adam' and 'lamb' optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. Usage ===== Testing the model using 'transformers': --------------------------------------- Finetunning using 'transformers': --------------------------------- Follow the guide linked here Finetuning using our code with TF 1.15.4: ----------------------------------------- Create the Training TFRecords: Finetuning: Model Sizes =========== All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Compute ------- Dataset ======= The pretraining data used for the new AraGPT2 model is also used for AraBERTv2 and AraELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Disclaimer ========== The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #safetensors #gpt2 #text-generation #ar #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text-classification
transformers
# AraGPT2 Detector

Machine-generated text detection model from the [AraGPT2: Pre-Trained Transformer for Arabic Language Generation paper](https://arxiv.org/abs/2012.15520)

This model is trained on long text passages and achieves a 99.4% F1-score.

# How to use it:

```python
from transformers import pipeline
from arabert.preprocess import ArabertPreprocessor

processor = ArabertPreprocessor(model_name="aubmindlab/araelectra-base-discriminator")
pipe = pipeline("sentiment-analysis", model="aubmindlab/aragpt2-mega-detector-long")

text = " "
text_prep = processor.preprocess(text)
result = pipe(text_prep)
# [{'label': 'machine-generated', 'score': 0.9977743625640869}]
```

# If you used this model please cite us as :

```
@misc{antoun2020aragpt2,
      title={AraGPT2: Pre-Trained Transformer for Arabic Language Generation},
      author={Wissam Antoun and Fady Baly and Hazem Hajj},
      year={2020},
      eprint={2012.15520},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# Contacts

**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "widget": [{"text": "\u0648\u0625\u0630\u0627 \u0643\u0627\u0646 \u0647\u0646\u0627\u0643 \u0645\u0646 \u0644\u0627 \u064a\u0632\u0627\u0644 \u064a\u0639\u062a\u0642\u062f \u0623\u0646 \u0644\u0628\u0646\u0627\u0646 \u0647\u0648 \u0633\u0648\u064a\u0633\u0631\u0627 \u0627\u0644\u0634\u0631\u0642 \u060c \u0641\u0647\u0648 \u0645\u062e\u0637\u0626 \u0625\u0644\u0649 \u062d\u062f \u0628\u0639\u064a\u062f . \u0641\u0644\u0628\u0646\u0627\u0646 \u0644\u064a\u0633 \u0633\u0648\u064a\u0633\u0631\u0627 \u060c \u0648\u0644\u0627 \u064a\u0645\u0643\u0646 \u0623\u0646 \u064a\u0643\u0648\u0646 \u0643\u0630\u0644\u0643 . \u0644\u0642\u062f \u0639\u0627\u0634 \u0627\u0644\u0644\u0628\u0646\u0627\u0646\u064a\u0648\u0646 \u0641\u064a \u0647\u0630\u0627 \u0627\u0644\u0628\u0644\u062f \u0645\u0646\u0630 \u0645\u0627 \u064a\u0632\u064a\u062f \u0639\u0646 \u0623\u0644\u0641 \u0648\u062e\u0645\u0633\u0645\u0626\u0629 \u0639\u0627\u0645 \u060c \u0623\u064a \u0645\u0646\u0630 \u062a\u0623\u0633\u064a\u0633 \u0627\u0644\u0625\u0645\u0627\u0631\u0629 \u0627\u0644\u0634\u0647\u0627\u0628\u064a\u0629 \u0627\u0644\u062a\u064a \u0623\u0633\u0633\u0647\u0627 \u0627\u0644\u0623\u0645\u064a\u0631 \u0641\u062e\u0631 \u0627\u0644\u062f\u064a\u0646 \u0627\u0644\u0645\u0639\u0646\u064a \u0627\u0644\u062b\u0627\u0646\u064a ( 1697 - 1742 )"}]}
aubmindlab/aragpt2-mega-detector-long
null
[ "transformers", "pytorch", "safetensors", "electra", "text-classification", "ar", "arxiv:2012.15520", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.15520" ]
[ "ar" ]
TAGS #transformers #pytorch #safetensors #electra #text-classification #ar #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #has_space #region-us
# AraGPT2 Detector Machine generated detector model from the AraGPT2: Pre-Trained Transformer for Arabic Language Generation paper This model is trained on the long text passages, and achieves a 99.4% F1-Score. # How to use it: # If you used this model please cite us as : # Contacts Wissam Antoun: Linkedin | Twitter | Github | <wfa07@URL> | <URL@URL> Fady Baly: Linkedin | Twitter | Github | <fgb06@URL> | <URL@URL>
[ "# AraGPT2 Detector\n\nMachine generated detector model from the AraGPT2: Pre-Trained Transformer for Arabic Language Generation paper\n\nThis model is trained on the long text passages, and achieves a 99.4% F1-Score.", "# How to use it:", "# If you used this model please cite us as :", "# Contacts\nWissam Antoun: Linkedin | Twitter | Github | <wfa07@URL> | <URL@URL>\n\nFady Baly: Linkedin | Twitter | Github | <fgb06@URL> | <URL@URL>" ]
[ "TAGS\n#transformers #pytorch #safetensors #electra #text-classification #ar #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# AraGPT2 Detector\n\nMachine generated detector model from the AraGPT2: Pre-Trained Transformer for Arabic Language Generation paper\n\nThis model is trained on the long text passages, and achieves a 99.4% F1-Score.", "# How to use it:", "# If you used this model please cite us as :", "# Contacts\nWissam Antoun: Linkedin | Twitter | Github | <wfa07@URL> | <URL@URL>\n\nFady Baly: Linkedin | Twitter | Github | <fgb06@URL> | <URL@URL>" ]
text-generation
transformers
# Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. # Usage ## Testing the model using `transformers`: You need to use the GPT2LMHeadModel from `arabert`: `pip install arabert` ```python from transformers import GPT2TokenizerFast, pipeline #for base and medium from transformers import GPT2LMHeadModel #for large and mega from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel from arabert.preprocess import ArabertPreprocessor MODEL_NAME='aubmindlab/aragpt2-mega' arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME) text="" text_clean = arabert_prep.preprocess(text) model = GPT2LMHeadModel.from_pretrained(MODEL_NAME) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME) generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer) #feel free to try different decoding settings generation_pipeline(text, pad_token_id=tokenizer.eos_token_id, num_beams=10, max_length=200, top_p=0.9, repetition_penalty = 3.0, no_repeat_ngram_size = 3)[0]['generated_text'] >>> ``` ## Finetunning using `transformers`: Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed) ## Finetuning using our code with TF 1.15.4: Create the Training TFRecords: ```bash python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files> ``` Finetuning: ```bash python3 run_pretraining.py \\r\n --input_file="gs://<GS_BUCKET>/pretraining_data/*" \\r\n --output_dir="gs://<GS_BUCKET>/pretraining_model/" \\r\n --config_file="config/small_hparams.json" \\r\n --batch_size=128 \\r\n --eval_batch_size=8 \\r\n --num_train_steps= \\r\n --num_warmup_steps= \\r\n --learning_rate= \\r\n --save_checkpoints_steps= \\r\n --max_seq_length=1024 \\r\n --max_eval_steps= \\r\n --optimizer="lamb" \\r\n --iterations_per_loop=5000 \\r\n --keep_checkpoint_max=10 \\r\n --use_tpu=True \\r\n --tpu_name=<TPU NAME> \\r\n --do_train=True \\r\n --do_eval=False ``` # Model Sizes Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params | ---|:---:|:---:|:---:|:---:|:---:|:---: AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB/135M | AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M | 
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M | AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Compute For Dataset Source see the [Dataset Section](#Dataset) Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5 AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5 AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3 AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9 # Dataset The pretraining data used for the new AraBERT model is also used for **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "license": "other", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "license_name": "custom", "license_link": "https://github.com/aub-mind/arabert/blob/master/aragpt2/LICENSE", "inference": false, "widget": [{"text": "\u064a\u062d\u0643\u0649 \u0623\u0646 \u0645\u0632\u0627\u0631\u0639\u0627 \u0645\u062e\u0627\u062f\u0639\u0627 \u0642\u0627\u0645 \u0628\u0628\u064a\u0639 \u0628\u0626\u0631 \u0627\u0644\u0645\u0627\u0621 \u0627\u0644\u0645\u0648\u062c\u0648\u062f \u0641\u064a \u0623\u0631\u0636\u0647 \u0644\u062c\u0627\u0631\u0647 \u0645\u0642\u0627\u0628\u0644 \u0645\u0628\u0644\u063a \u0643\u0628\u064a\u0631 \u0645\u0646 \u0627\u0644\u0645\u0627\u0644"}, {"text": "\u0627\u0644\u0642\u062f\u0633 \u0645\u062f\u064a\u0646\u0629 \u062a\u0627\u0631\u064a\u062e\u064a\u0629\u060c \u0628\u0646\u0627\u0647\u0627 \u0627\u0644\u0643\u0646\u0639\u0627\u0646\u064a\u0648\u0646 \u0641\u064a"}, {"text": "\u0643\u0627\u0646 \u064a\u0627 \u0645\u0627 \u0643\u0627\u0646 \u0641\u064a \u0642\u062f\u064a\u0645 \u0627\u0644\u0632\u0645\u0627\u0646"}]}
aubmindlab/aragpt2-mega
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "ar", "arxiv:2012.15520", "license:other", "autotrain_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.15520" ]
[ "ar" ]
TAGS #transformers #pytorch #tensorboard #gpt2 #text-generation #ar #arxiv-2012.15520 #license-other #autotrain_compatible #has_space #text-generation-inference #region-us
Arabic GPT2 =========== <img src="URL width="100" align="left"/> You can find more information in our paper AraGPT2 The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the 'gpt2' folder and can trains models from the minimaxir/gpt-2-simple repository. These models were trained using the 'lamb' optimizer and follow the same architecture as 'gpt2' and are fully compatible with the 'transformers' library. GPT2-large and GPT2-mega were trained using the imcaspar/gpt2-ml library, and follow the 'grover' architecture. You can use the pytorch classes found in 'grover/modeling\_gpt2.py' as a direct replacement for classes in the 'transformers' library (it should support version 'v4.x' from 'transformers'). Both models are trained using the 'adafactor' optimizer, since the 'adam' and 'lamb' optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. Usage ===== Testing the model using 'transformers': --------------------------------------- You need to use the GPT2LMHeadModel from 'arabert': 'pip install arabert' Finetunning using 'transformers': --------------------------------- Follow the guide linked here Finetuning using our code with TF 1.15.4: ----------------------------------------- Create the Training TFRecords: Finetuning: Model Sizes =========== All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Compute ------- For Dataset Source see the Dataset Section Dataset ======= The pretraining data used for the new AraBERT model is also used for GPT2 and ELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Disclaimer ========== The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #ar #arxiv-2012.15520 #license-other #autotrain_compatible #has_space #text-generation-inference #region-us \n" ]
fill-mask
transformers
# !!! A newer version of this model is available !!! [AraBERTv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) # AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/> **AraBERT** is an Arabic pretrained lanaguage model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were splitted using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html). We evalaute AraBERT models on different downstream tasks and compare them to [mBERT]((https://github.com/google-research/bert/blob/master/multilingual.md)), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL) # AraBERTv2 ## What's New! AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2) Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB 136M | Yes | 77M / 23GB / 2.7B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Better Pre-Processing and New Vocab We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library. 
**P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing dunction **Please read the section on how to use the [preprocessing function](#Preprocessing)** ## Bigger Dataset and More Compute We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset) Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) | ---|:---:|:---:|:---:|:---:|:---:|:---: AraBERTv0.2-base | TPUv3-8 | 420M / 207M |2560 / 1M | 384/ 2M | 3M | - AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | - AraBERTv2-base | TPUv3-8 | 520M / 245M |13440 / 250K | 2056 / 300K | 550K | - AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | - AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 days # Dataset The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`** ```python from arabert.preprocess import ArabertPreprocessor model_name="bert-base-arabert" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) >>>"و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري" ``` ## Accepted_models ``` bert-base-arabertv01 bert-base-arabert bert-base-arabertv02 bert-base-arabertv2 bert-large-arabertv02 bert-large-arabertv2 araelectra-base aragpt2-base aragpt2-medium aragpt2-large aragpt2-mega ``` # TensorFlow 1.x models The TF1.x model are available in the HuggingFace models repo. You can download them as follows: - via git-lfs: clone all the models in a repo ```bash curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/aubmindlab/MODEL_NAME tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz ``` where `MODEL_NAME` is any model under the `aubmindlab` name - via `wget`: - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME. 
  - copy the `oid sha256`
  - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)

# If you used this model please cite us as :

Google Scholar has our Bibtex wrong (missing name), use this instead:

```
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```

# Acknowledgments

Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT.

## Contacts

**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
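As a complement to the preprocessing snippet earlier in this card, here is a small, illustrative sketch of actually running the model with the `transformers` fill-mask pipeline. The example sentence, the way the `[MASK]` token is reinserted after Farasa segmentation, and the top-3 printout are all assumptions made for illustration; they follow the masked-input format shown in this card's widget but are not an official recipe.

```python
# Illustrative usage sketch: preprocessing + fill-mask with bert-base-arabert.
from transformers import pipeline
from arabert.preprocess import ArabertPreprocessor

model_name = "aubmindlab/bert-base-arabert"
arabert_prep = ArabertPreprocessor(model_name=model_name)

fill_mask = pipeline("fill-mask", model=model_name)

# Example (arbitrary) sentence: "The capital of Lebanon is [MASK]."
# Preprocess everything except the mask token, then append it, so the
# segmented input matches the format the model was trained on.
prefix = arabert_prep.preprocess("عاصمة لبنان هي")
masked_input = prefix + " [MASK] ."

# Print the top-3 candidate fills with their scores.
for prediction in fill_mask(masked_input)[:3]:
    print(prediction["token_str"], round(prediction["score"], 4))
```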
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645 +\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-base-arabert
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2003.00104" ]
[ "ar" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #fill-mask #ar #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us
!!! A newer version of this model is available !!! AraBERTv2 ============================================================ AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding ===================================================================== <img src="URL width="100" align="left"/> AraBERT is an Arabic pretrained lanaguage model based on Google's BERT architechture. AraBERT uses the same BERT-Base config. More details are available in the AraBERT Paper and in the AraBERT Meetup There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were splitted using the Farasa Segmenter. We evalaute AraBERT models on different downstream tasks and compare them to mBERT), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets (HARD, ASTD-Balanced, ArsenTD-Lev, LABR), Named Entity Recognition with the ANERcorp, and Arabic Question Answering on Arabic-SQuAD and ARCD AraBERTv2 ========= What's New! ----------- AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the README and in the AraBERT Paper All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Better Pre-Processing and New Vocab ----------------------------------- We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learnt using the 'BertWordpieceTokenizer' from the 'tokenizers' library, and should now support the Fast tokenizer implementation from the 'transformers' library. P.S.: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing dunction Please read the section on how to use the preprocessing function Bigger Dataset and More Compute ------------------------------- We used ~3.5 times more data, and trained for longer. For Dataset Sources see the Dataset Section Dataset ======= The pretraining data used for the new AraBERT model is also used for Arabic GPT2 and ELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Preprocessing ============= It is recommended to apply our preprocessing function before training/testing on any dataset. Install farasapy to segment text for AraBERT v1 & v2 'pip install farasapy' Accepted\_models ---------------- TensorFlow 1.x models ===================== The TF1.x model are available in the HuggingFace models repo. 
You can download them as follows: * via git-lfs: clone all the models in a repo where 'MODEL\_NAME' is any model under the 'aubmindlab' name * via 'wget': + Go to the tf1\_model.URL file on URL + copy the 'oid sha256' + then run 'wget URL (ex: for 'aragpt2-base': 'wget URL If you used this model please cite us as : ========================================== Google Scholar has our Bibtex wrong (missing name), use this instead Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts -------- Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #ar #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
fill-mask
transformers
# !!! A newer version of this model is available !!! [AraBERTv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) # AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/> **AraBERT** is an Arabic pretrained lanaguage model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were splitted using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html). We evalaute AraBERT models on different downstream tasks and compare them to [mBERT]((https://github.com/google-research/bert/blob/master/multilingual.md)), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL) # AraBERTv2 ## What's New! AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2) Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB 136M | Yes | 77M / 23GB / 2.7B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Better Pre-Processing and New Vocab We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library. 
**P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing dunction **Please read the section on how to use the [preprocessing function](#Preprocessing)** ## Bigger Dataset and More Compute We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset) Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) | ---|:---:|:---:|:---:|:---:|:---:|:---: AraBERTv0.2-base | TPUv3-8 | 420M / 207M |2560 / 1M | 384/ 2M | 3M | - AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | - AraBERTv2-base | TPUv3-8 | 520M / 245M |13440 / 250K | 2056 / 300K | 550K | - AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | - AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 days # Dataset The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`** ```python from arabert.preprocess import ArabertPreprocessor model_name="bert-base-arabertv01" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) ``` ## Accepted_models ``` bert-base-arabertv01 bert-base-arabert bert-base-arabertv02 bert-base-arabertv2 bert-large-arabertv02 bert-large-arabertv2 araelectra-base aragpt2-base aragpt2-medium aragpt2-large aragpt2-mega ``` # TensorFlow 1.x models The TF1.x model are available in the HuggingFace models repo. You can download them as follows: - via git-lfs: clone all the models in a repo ```bash curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/aubmindlab/MODEL_NAME tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz ``` where `MODEL_NAME` is any model under the `aubmindlab` name - via `wget`: - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME. 
  - copy the `oid sha256`
  - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)

# If you used this model please cite us as :

Google Scholar has our Bibtex wrong (missing name), use this instead:

```
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```

# Acknowledgments

Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT.

# Contacts

**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "OSIAN", "1.5B_Arabic_Corpus"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-base-arabertv01
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "ar", "dataset:wikipedia", "dataset:OSIAN", "dataset:1.5B_Arabic_Corpus", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2003.00104" ]
[ "ar" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #fill-mask #ar #dataset-wikipedia #dataset-OSIAN #dataset-1.5B_Arabic_Corpus #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us
!!! A newer version of this model is available !!! AraBERTv02 ============================================================= AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding ===================================================================== <img src="URL width="100" align="left"/> AraBERT is an Arabic pretrained lanaguage model based on Google's BERT architechture. AraBERT uses the same BERT-Base config. More details are available in the AraBERT Paper and in the AraBERT Meetup There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were splitted using the Farasa Segmenter. We evalaute AraBERT models on different downstream tasks and compare them to mBERT), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets (HARD, ASTD-Balanced, ArsenTD-Lev, LABR), Named Entity Recognition with the ANERcorp, and Arabic Question Answering on Arabic-SQuAD and ARCD AraBERTv2 ========= What's New! ----------- AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the README and in the AraBERT Paper All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Better Pre-Processing and New Vocab ----------------------------------- We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learnt using the 'BertWordpieceTokenizer' from the 'tokenizers' library, and should now support the Fast tokenizer implementation from the 'transformers' library. P.S.: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing dunction Please read the section on how to use the preprocessing function Bigger Dataset and More Compute ------------------------------- We used ~3.5 times more data, and trained for longer. For Dataset Sources see the Dataset Section Dataset ======= The pretraining data used for the new AraBERT model is also used for Arabic GPT2 and ELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Preprocessing ============= It is recommended to apply our preprocessing function before training/testing on any dataset. Install farasapy to segment text for AraBERT v1 & v2 'pip install farasapy' Accepted\_models ---------------- TensorFlow 1.x models ===================== The TF1.x model are available in the HuggingFace models repo. 
You can download them as follows: * via git-lfs: clone all the models in a repo where 'MODEL\_NAME' is any model under the 'aubmindlab' name * via 'wget': + Go to the tf1\_model.URL file on URL + copy the 'oid sha256' + then run 'wget URL (ex: for 'aragpt2-base': 'wget URL If you used this model please cite us as : ========================================== Google Scholar has our Bibtex wrong (missing name), use this instead Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #ar #dataset-wikipedia #dataset-OSIAN #dataset-1.5B_Arabic_Corpus #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
fill-mask
transformers
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="center"/> # AraBERTv0.2-Twitter AraBERTv0.2-Twitter-base/large are two new models for Arabic dialects and tweets, trained by continuing the pre-training using the MLM task on ~60M Arabic tweets (filtered from a collection on 100M). The two new models have had emojies added to their vocabulary in addition to common words that weren't at first present. The pre-training was done with a max sentence length of 64 only for 1 epoch. **AraBERT** is an Arabic pretrained language model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) ## Other Models Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B | AraBERTv0.2-Twitter-base| [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) | 543MB / 136M | No | Same as v02 + 60M Multi-Dialect Tweets| AraBERTv0.2-Twitter-large| [bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) | 1.38G / 371M | No | Same as v02 + 60M Multi-Dialect Tweets| # Preprocessing **The model is trained on a sequence length of 64, using max length beyond 64 might result in degraded performance** It is recommended to apply our preprocessing function before training/testing on any dataset. The preprocessor will keep and space out emojis when used with a "twitter" model. 
```python from arabert.preprocess import ArabertPreprocessor from transformers import AutoTokenizer, AutoModelForMaskedLM model_name="aubmindlab/bert-base-arabertv02-twitter" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv02-twitter") model = AutoModelForMaskedLM.from_pretrained("aubmindlab/bert-base-arabertv02-twitter") ``` # If you used this model please cite us as : Google Scholar has our Bibtex wrong (missing name), use this instead ``` @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)", "Twitter(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-base-arabertv02-twitter
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2003.00104" ]
[ "ar" ]
TAGS #transformers #pytorch #tensorboard #safetensors #bert #fill-mask #ar #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us
<img src="URL width="100" align="center"/> AraBERTv0.2-Twitter =================== AraBERTv0.2-Twitter-base/large are two new models for Arabic dialects and tweets, trained by continuing the pre-training using the MLM task on ~60M Arabic tweets (filtered from a collection on 100M). The two new models have had emojies added to their vocabulary in addition to common words that weren't at first present. The pre-training was done with a max sentence length of 64 only for 1 epoch. AraBERT is an Arabic pretrained language model based on Google's BERT architechture. AraBERT uses the same BERT-Base config. More details are available in the AraBERT Paper and in the AraBERT Meetup Other Models ------------ Preprocessing ============= The model is trained on a sequence length of 64, using max length beyond 64 might result in degraded performance It is recommended to apply our preprocessing function before training/testing on any dataset. The preprocessor will keep and space out emojis when used with a "twitter" model. If you used this model please cite us as : ========================================== Google Scholar has our Bibtex wrong (missing name), use this instead Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #fill-mask #ar #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
fill-mask
transformers
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/> **AraBERT** is an Arabic pretrained language model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html). We evaluate AraBERT models on different downstream tasks and compare them to [mBERT]((https://github.com/google-research/bert/blob/master/multilingual.md)), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL) # AraBERTv2 ## What's New! AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2) Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.2-Twitter-base| [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) | 543MB / 136M | No | Same as v02 + 60M Multi-Dialect Tweets| AraBERTv0.2-Twitter-large| [bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) | 1.38G / 371M | No | Same as v02 + 60M Multi-Dialect Tweets| AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB 136M | Yes | 77M / 23GB / 2.7B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Better Pre-Processing and New Vocab We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. 
We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learned using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library. **P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function **Please read the section on how to use the [preprocessing function](#Preprocessing)** ## Bigger Dataset and More Compute We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset) Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) | ---|:---:|:---:|:---:|:---:|:---:|:---: AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | - AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7 AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | - AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7 AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 # Dataset The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for providing us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data `pip install arabert`** ```python from arabert.preprocess import ArabertPreprocessor model_name="aubmindlab/bert-large-arabertv02" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا: إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) >>> output: ولن نبالغ إذا قلنا : إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري ``` # TensorFlow 1.x models The TF1.x model are available in the HuggingFace models repo. You can download them as follows: - via git-lfs: clone all the models in a repo ```bash curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/aubmindlab/MODEL_NAME tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz ``` where `MODEL_NAME` is any model under the `aubmindlab` name - via `wget`: - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME. 
- copy the `oid sha256` - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`) # If you used this model please cite us as : Google Scholar has our Bibtex wrong (missing name), use this instead ``` @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-base-arabertv02
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2003.00104" ]
[ "ar" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #safetensors #bert #fill-mask #ar #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us
AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding ===================================================================== <img src="URL width="100" align="left"/> AraBERT is an Arabic pretrained language model based on Google's BERT architechture. AraBERT uses the same BERT-Base config. More details are available in the AraBERT Paper and in the AraBERT Meetup There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the Farasa Segmenter. We evaluate AraBERT models on different downstream tasks and compare them to mBERT), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets (HARD, ASTD-Balanced, ArsenTD-Lev, LABR), Named Entity Recognition with the ANERcorp, and Arabic Question Answering on Arabic-SQuAD and ARCD AraBERTv2 ========= What's New! ----------- AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the README and in the AraBERT Paper All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Better Pre-Processing and New Vocab ----------------------------------- We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learned using the 'BertWordpieceTokenizer' from the 'tokenizers' library, and should now support the Fast tokenizer implementation from the 'transformers' library. P.S.: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function Please read the section on how to use the preprocessing function Bigger Dataset and More Compute ------------------------------- We used ~3.5 times more data, and trained for longer. For Dataset Sources see the Dataset Section Dataset ======= The pretraining data used for the new AraBERT model is also used for Arabic GPT2 and ELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for providing us the data Preprocessing ============= It is recommended to apply our preprocessing function before training/testing on any dataset. Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data 'pip install arabert' TensorFlow 1.x models ===================== The TF1.x model are available in the HuggingFace models repo. 
You can download them as follows: * via git-lfs: clone all the models in a repo where 'MODEL\_NAME' is any model under the 'aubmindlab' name * via 'wget': + Go to the tf1\_model.URL file on URL + copy the 'oid sha256' + then run 'wget URL (ex: for 'aragpt2-base': 'wget URL If you used this model please cite us as : ========================================== Google Scholar has our Bibtex wrong (missing name), use this instead Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #safetensors #bert #fill-mask #ar #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
fill-mask
transformers
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>

**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup)

There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).

We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL)

# AraBERTv2

## What's New!

AraBERT now comes in 4 new variants to replace the old v1 versions:

More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)

Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |

All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.

## Better Pre-Processing and New Vocab

We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuation and numbers that were still attached to words when the wordpiece vocab was learned. We now insert a space between numbers and characters and around punctuation characters.

The new vocabulary was learned using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
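As a small illustration of the fast-tokenizer support described above, the check below is a minimal sketch that assumes only the standard `transformers` `AutoTokenizer` API; the model id is the one from this card, everything else is illustrative:

```python
from transformers import AutoTokenizer

# Load the AraBERTv2 tokenizer; with the rebuilt wordpiece vocab it is expected
# to load as a "fast" (Rust-backed) tokenizer rather than the slow Python one.
tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")

print(tokenizer.is_fast)  # expected: True
# Tokenize a pre-segmented sentence (the form produced by the preprocessor shown below)
print(tokenizer.tokenize("و+ لن نبالغ إذا قل +نا"))
```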
**P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function
**Please read the section on how to use the [preprocessing function](#Preprocessing)**

## Bigger Dataset and More Compute

We used ~3.5 times more data, and trained for longer.
For Dataset Sources see the [Dataset Section](#Dataset)

Model | Hardware | num of examples with seq len (128 / 512) | 128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERT-base (v1/v0.1) | TPUv2-8 | - | 512 / 900K | 128 / 300K | 1.2M | 4

# Dataset

The pretraining data used for the new AraBERT model is also used for Arabic **AraGPT2 and AraELECTRA**.

The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)

For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. Huge thank you to Assafir for giving us the data

# Preprocessing

It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**

```python
from arabert.preprocess import ArabertPreprocessor

model_name="bert-base-arabertv2"
arabert_prep = ArabertPreprocessor(model_name=model_name)

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
>>>"و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري"
```

## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```

# TensorFlow 1.x models

The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name

- via `wget`:
  - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
  - copy the `oid sha256`
  - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)

# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```

# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT.

# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled"], "widget": [{"text": " \u0639\u0627\u0635\u0645 +\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-base-arabertv2
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "ar", "dataset:wikipedia", "dataset:Osian", "dataset:1.5B-Arabic-Corpus", "dataset:oscar-arabic-unshuffled", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2003.00104" ]
[ "ar" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #fill-mask #ar #dataset-wikipedia #dataset-Osian #dataset-1.5B-Arabic-Corpus #dataset-oscar-arabic-unshuffled #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us
AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding ===================================================================== <img src="URL width="100" align="left"/> AraBERT is an Arabic pretrained lanaguage model based on Google's BERT architechture. AraBERT uses the same BERT-Base config. More details are available in the AraBERT Paper and in the AraBERT Meetup There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were splitted using the Farasa Segmenter. We evalaute AraBERT models on different downstream tasks and compare them to mBERT), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets (HARD, ASTD-Balanced, ArsenTD-Lev, LABR), Named Entity Recognition with the ANERcorp, and Arabic Question Answering on Arabic-SQuAD and ARCD AraBERTv2 ========= What's New! ----------- AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the README and in the AraBERT Paper All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Better Pre-Processing and New Vocab ----------------------------------- We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learnt using the 'BertWordpieceTokenizer' from the 'tokenizers' library, and should now support the Fast tokenizer implementation from the 'transformers' library. P.S.: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing dunction Please read the section on how to use the preprocessing function Bigger Dataset and More Compute ------------------------------- We used ~3.5 times more data, and trained for longer. For Dataset Sources see the Dataset Section Dataset ======= The pretraining data used for the new AraBERT model is also used for Arabic AraGPT2 and AraELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Preprocessing ============= It is recommended to apply our preprocessing function before training/testing on any dataset. Install farasapy to segment text for AraBERT v1 & v2 'pip install farasapy' Accepted\_models ---------------- TensorFlow 1.x models ===================== The TF1.x model are available in the HuggingFace models repo. 
You can download them as follows: * via git-lfs: clone all the models in a repo where 'MODEL\_NAME' is any model under the 'aubmindlab' name * via 'wget': + Go to the tf1\_model.URL file on URL + copy the 'oid sha256' + then run 'wget URL (ex: for 'aragpt2-base': 'wget URL If you used this model please cite us as : ========================================== Google Scholar has our Bibtex wrong (missing name), use this instead Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #ar #dataset-wikipedia #dataset-Osian #dataset-1.5B-Arabic-Corpus #dataset-oscar-arabic-unshuffled #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
fill-mask
transformers
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="center"/> # AraBERTv0.2-Twitter AraBERTv0.2-Twitter-base/large are two new models for Arabic dialects and tweets, trained by continuing the pre-training using the MLM task on ~60M Arabic tweets (filtered from a collection on 100M). The two new models have had emojies added to their vocabulary in addition to common words that weren't at first present. The pre-training was done with a max sentence length of 64 only for 1 epoch. **AraBERT** is an Arabic pretrained language model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) ## Other Models Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B | AraBERTv0.2-Twitter-base| [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) | 543MB / 136M | No | Same as v02 + 60M Multi-Dialect Tweets| AraBERTv0.2-Twitter-large| [bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) | 1.38G / 371M | No | Same as v02 + 60M Multi-Dialect Tweets| # Preprocessing **The model is trained on a sequence length of 64, using max length beyond 64 might result in degraded performance** It is recommended to apply our preprocessing function before training/testing on any dataset. The preprocessor will keep and space out emojis when used with a "twitter" model. 
```python from arabert.preprocess import ArabertPreprocessor from transformers import AutoTokenizer, AutoModelForMaskedLM model_name="aubmindlab/bert-base-arabertv02-twitter" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv02-twitter") model = AutoModelForMaskedLM.from_pretrained("aubmindlab/bert-base-arabertv02-twitter") ``` # If you used this model please cite us as : Google Scholar has our Bibtex wrong (missing name), use this instead ``` @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)", "Twitter(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-large-arabertv02-twitter
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2003.00104" ]
[ "ar" ]
TAGS #transformers #pytorch #tensorboard #safetensors #bert #fill-mask #ar #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us
<img src="URL width="100" align="center"/> AraBERTv0.2-Twitter =================== AraBERTv0.2-Twitter-base/large are two new models for Arabic dialects and tweets, trained by continuing the pre-training using the MLM task on ~60M Arabic tweets (filtered from a collection on 100M). The two new models have had emojies added to their vocabulary in addition to common words that weren't at first present. The pre-training was done with a max sentence length of 64 only for 1 epoch. AraBERT is an Arabic pretrained language model based on Google's BERT architechture. AraBERT uses the same BERT-Base config. More details are available in the AraBERT Paper and in the AraBERT Meetup Other Models ------------ Preprocessing ============= The model is trained on a sequence length of 64, using max length beyond 64 might result in degraded performance It is recommended to apply our preprocessing function before training/testing on any dataset. The preprocessor will keep and space out emojis when used with a "twitter" model. If you used this model please cite us as : ========================================== Google Scholar has our Bibtex wrong (missing name), use this instead Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #fill-mask #ar #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
fill-mask
transformers
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>

**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup)

There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).

We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL)

# AraBERTv2

## What's New!

AraBERT now comes in 4 new variants to replace the old v1 versions:

More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)

Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |

All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.

## Better Pre-Processing and New Vocab

We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuation and numbers that were still attached to words when the wordpiece vocab was learned. We now insert a space between numbers and characters and around punctuation characters.

The new vocabulary was learned using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.
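Because the vocabulary now works with the standard fast tokenizer, the model can also be queried directly through the `transformers` fill-mask pipeline. The snippet below is a rough sketch assuming the usual `pipeline` API; only the model id comes from this card:

```python
from transformers import pipeline

# Predict the masked token with AraBERTv0.2-large
fill_mask = pipeline("fill-mask", model="aubmindlab/bert-large-arabertv02")

for pred in fill_mask("عاصمة لبنان هي [MASK] ."):
    print(pred["token_str"], round(pred["score"], 3))
```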
**P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function
**Please read the section on how to use the [preprocessing function](#Preprocessing)**

## Bigger Dataset and More Compute

We used ~3.5 times more data, and trained for longer.
For Dataset Sources see the [Dataset Section](#Dataset)

Model | Hardware | num of examples with seq len (128 / 512) | 128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7
AraBERT-base (v1/v0.1) | TPUv2-8 | - | 512 / 900K | 128 / 300K | 1.2M | 4

# Dataset

The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**.

The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)

For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. Huge thank you to Assafir for giving us the data

# Preprocessing

It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`**

```python
from arabert.preprocess import ArabertPreprocessor

model_name="bert-large-arabertv02"
arabert_prep = ArabertPreprocessor(model_name=model_name)

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
```

## Accepted_models
```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```

# TensorFlow 1.x models

The TF1.x models are available in the HuggingFace models repo.
You can download them as follows:
- via git-lfs: clone all the models in a repo
```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```
where `MODEL_NAME` is any model under the `aubmindlab` name

- via `wget`:
  - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
  - copy the `oid sha256`
  - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)

# If you used this model please cite us as :
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```

# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT.

# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled"], "widget": [{"text": " \u0639\u0627\u0635\u0645\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-large-arabertv02
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "dataset:wikipedia", "dataset:Osian", "dataset:1.5B-Arabic-Corpus", "dataset:oscar-arabic-unshuffled", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2003.00104" ]
[ "ar" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #safetensors #bert #fill-mask #ar #dataset-wikipedia #dataset-Osian #dataset-1.5B-Arabic-Corpus #dataset-oscar-arabic-unshuffled #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us
AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding ===================================================================== <img src="URL width="100" align="left"/> AraBERT is an Arabic pretrained language model based on Google's BERT architecture. AraBERT uses the same BERT-Base config. More details are available in the AraBERT Paper and in the AraBERT Meetup There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the Farasa Segmenter. We evaluate AraBERT models on different downstream tasks and compare them to mBERT, and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets (HARD, ASTD-Balanced, ArsenTD-Lev, LABR), Named Entity Recognition with the ANERcorp, and Arabic Question Answering on Arabic-SQuAD and ARCD AraBERTv2 ========= What's New! ----------- AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the README and in the AraBERT Paper All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Better Pre-Processing and New Vocab ----------------------------------- We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learning the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learnt using the 'BertWordpieceTokenizer' from the 'tokenizers' library, and should now support the Fast tokenizer implementation from the 'transformers' library. P.S.: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function Please read the section on how to use the preprocessing function Bigger Dataset and More Compute ------------------------------- We used ~3.5 times more data, and trained for longer. For Dataset Sources see the Dataset Section Dataset ======= The pretraining data used for the new AraBERT model is also used for Arabic GPT2 and ELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you to Assafir for giving us the data Preprocessing ============= It is recommended to apply our preprocessing function before training/testing on any dataset. Install farasapy to segment text for AraBERT v1 & v2 'pip install farasapy' Accepted\_models ---------------- TensorFlow 1.x models ===================== The TF1.x models are available in the HuggingFace models repo.
You can download them as follows: * via git-lfs: clone all the models in a repo where 'MODEL\_NAME' is any model under the 'aubmindlab' name * via 'wget': + Go to the tf1\_model.URL file on URL + copy the 'oid sha256' + then run 'wget URL (ex: for 'aragpt2-base': 'wget URL If you used this model please cite us as : ========================================== Google Scholar has our Bibtex wrong (missing name), use this instead Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks to Habib Rahal (URL) for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
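The card recommends Farasa segmentation for the v1/v2 checkpoints, but the accompanying snippet was stripped from this rendering. A minimal sketch of that step, assuming farasapy's documented `FarasaSegmenter` interface; the example sentence is the one used elsewhere in this card family:

```python
# Hedged sketch: assumes the farasapy package exposes FarasaSegmenter as in its documentation.
from farasa.segmenter import FarasaSegmenter

segmenter = FarasaSegmenter(interactive=True)  # interactive mode keeps the underlying Farasa process alive
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
print(segmenter.segment(text))  # expected to return the text with prefixes/suffixes split off, e.g. "و+ لن ..."
```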
[]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #safetensors #bert #fill-mask #ar #dataset-wikipedia #dataset-Osian #dataset-1.5B-Arabic-Corpus #dataset-oscar-arabic-unshuffled #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
fill-mask
transformers
# AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/> **AraBERT** is an Arabic pretrained language model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html). We evaluate AraBERT models on different downstream tasks and compare them to [mBERT]((https://github.com/google-research/bert/blob/master/multilingual.md)), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL) # AraBERTv2 ## What's New! AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2) Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.2-Twitter-base| [bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) | 543MB / 136M | No | Same as v02 + 60M Multi-Dialect Tweets| AraBERTv0.2-Twitter-large| [bert-large-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter) | 1.38G / 371M | No | Same as v02 + 60M Multi-Dialect Tweets| AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB 136M | Yes | 77M / 23GB / 2.7B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Better Pre-Processing and New Vocab We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. 
We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learned using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library. **P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function **Please read the section on how to use the [preprocessing function](#Preprocessing)** ## Bigger Dataset and More Compute We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset) Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) | ---|:---:|:---:|:---:|:---:|:---:|:---: AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | - AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | 7 AraBERTv2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384/ 2M | 3M | - AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | 7 AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 # Dataset The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for providing us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data `pip install arabert`** ```python from arabert.preprocess import ArabertPreprocessor model_name="aubmindlab/bert-large-arabertv2" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) >>>"و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري" ``` # TensorFlow 1.x models The TF1.x model are available in the HuggingFace models repo. You can download them as follows: - via git-lfs: clone all the models in a repo ```bash curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/aubmindlab/MODEL_NAME tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz ``` where `MODEL_NAME` is any model under the `aubmindlab` name - via `wget`: - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME. 
- copy the `oid sha256` - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`) # If you used this model please cite us as : Google Scholar has our Bibtex wrong (missing name), use this instead ``` @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
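Since the card documents preprocessing but not inference, here is a short masked-token sketch; the checkpoint id and the pre-segmented example come from this card and its widget metadata, while the pipeline call itself is a standard pattern rather than the authors' own snippet:

```python
from transformers import pipeline

# aubmindlab/bert-large-arabertv2 expects Farasa pre-segmented input, as described above.
fill_mask = pipeline("fill-mask", model="aubmindlab/bert-large-arabertv2")

# Widget example from the card metadata: "the capital of Lebanon is [MASK]."
for prediction in fill_mask("عاصم +ة لبنان هي [MASK] ."):
    print(prediction["token_str"], round(prediction["score"], 3))
```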
{"language": "ar", "datasets": ["wikipedia", "Osian", "1.5B-Arabic-Corpus", "oscar-arabic-unshuffled", "Assafir(private)"], "widget": [{"text": " \u0639\u0627\u0635\u0645 +\u0629 \u0644\u0628\u0646\u0627\u0646 \u0647\u064a [MASK] ."}]}
aubmindlab/bert-large-arabertv2
null
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2003.00104" ]
[ "ar" ]
TAGS #transformers #pytorch #tf #jax #tensorboard #safetensors #bert #fill-mask #ar #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us
AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding ===================================================================== <img src="URL width="100" align="left"/> AraBERT is an Arabic pretrained language model based on Google's BERT architecture. AraBERT uses the same BERT-Base config. More details are available in the AraBERT Paper and in the AraBERT Meetup There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the Farasa Segmenter. We evaluate AraBERT models on different downstream tasks and compare them to mBERT, and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets (HARD, ASTD-Balanced, ArsenTD-Lev, LABR), Named Entity Recognition with the ANERcorp, and Arabic Question Answering on Arabic-SQuAD and ARCD AraBERTv2 ========= What's New! ----------- AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the README and in the AraBERT Paper All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Better Pre-Processing and New Vocab ----------------------------------- We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learning the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. The new vocabulary was learned using the 'BertWordpieceTokenizer' from the 'tokenizers' library, and should now support the Fast tokenizer implementation from the 'transformers' library. P.S.: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing function Please read the section on how to use the preprocessing function Bigger Dataset and More Compute ------------------------------- We used ~3.5 times more data, and trained for longer. For Dataset Sources see the Dataset Section Dataset ======= The pretraining data used for the new AraBERT model is also used for Arabic GPT2 and ELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you to Assafir for providing us the data Preprocessing ============= It is recommended to apply our preprocessing function before training/testing on any dataset. Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data 'pip install arabert' TensorFlow 1.x models ===================== The TF1.x models are available in the HuggingFace models repo.
You can download them as follows: * via git-lfs: clone all the models in a repo where 'MODEL\_NAME' is any model under the 'aubmindlab' name * via 'wget': + Go to the tf1\_model.URL file on URL + copy the 'oid sha256' + then run 'wget URL (ex: for 'aragpt2-base': 'wget URL If you used this model please cite us as : ========================================== Google Scholar has our Bibtex wrong (missing name), use this instead Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks to Habib Rahal (URL) for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #pytorch #tf #jax #tensorboard #safetensors #bert #fill-mask #ar #arxiv-2003.00104 #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text2text-generation
transformers
This folder contains a Google T5 Transformer fine-tuned to generate paraphrases using: - Para_NMT_50M_Paraphrasing_train_small.csv 134337 lines of sentence pairs 19Mbytes - Para_NMT_50M_Paraphrasing_val_small.csv 14928 lines of sentence pairs 2.0Mbytes Training Start Time: Sun Mar 14 18:27:15 2021 Training End Time: Sun Mar 14 22:19:00 2021
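The card describes the training run but not how to query the checkpoint. A hedged sketch, assuming the model accepts a plain source sentence as input (the exact prompt format used during fine-tuning is not documented here):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "auday/paraphraser_model1"  # id taken from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input sentence; beam search keeps the single best paraphrase.
inputs = tokenizer("The weather today is lovely.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```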
{}
auday/paraphraser_model1
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This folder contains a Google T5 Transformer fine-tuned to generate paraphrases using: - Para_NMT_50M_Paraphrasing_train_small.csv 134337 lines of sentence pairs 19Mbytes - Para_NMT_50M_Paraphrasing_val_small.csv 14928 lines of sentence pairs 2.0Mbytes Training Start Time: Sun Mar 14 18:27:15 2021 Training End Time: Sun Mar 14 22:19:00 2021
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
This folder contains a Google T5 Transformer fine-tuned to generate paraphrases using: - Quora_pair_train 134337 lines of sentence pairs 14 Mbytes - Quora_pair_val 14928 lines of sentence pairs 1.6 Mbytes training epoch: 6 Start Time: Sun Mar 14 18:27:15 2021 End Time: Sun Mar 14 22:19:00 2021
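As with the sibling model above, no inference snippet is provided; a sketch for sampling several candidate paraphrases, again assuming a plain-text input format:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "auday/paraphraser_model2"  # id taken from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("How do I learn to cook quickly?", return_tensors="pt")  # hypothetical input
candidates = model.generate(
    **inputs, do_sample=True, top_k=50, top_p=0.95, num_return_sequences=3, max_length=64
)
for sequence in candidates:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```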
{}
auday/paraphraser_model2
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This folder contains a Google T5 Transformer fine-tuned to generate paraphrases using: - Quora_pair_train 134337 lines of sentence pairs 14 Mbytes - Quora_pair_val 14928 lines of sentence pairs 1.6 Mbytes training epoch: 6 Start Time: Sun Mar 14 18:27:15 2021 End Time: Sun Mar 14 22:19:00 2021
[]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
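The card is only a title, so the following single-turn chat sketch follows the usual DialoGPT pattern; the prompt and decoding settings are illustrative, not the author's:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "augustojaba/DialoGPT-small-harrypotter"  # id taken from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode one user turn terminated by the end-of-sequence token and let the model reply.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```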
{"tags": ["conversational"]}
augustojaba/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # augustoortiz/bert-finetuned-squad2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.2223 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11091, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.2223 | 0 | ### Framework versions - Transformers 4.17.0.dev0 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
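A hedged extractive-QA sketch for this checkpoint; it loads the TensorFlow weights reported above, and the question/context pair is invented for illustration:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="augustoortiz/bert-finetuned-squad2",  # id taken from this card
    framework="tf",                              # the card reports Keras/TensorFlow training
)

result = qa(
    question="Which base model was fine-tuned?",
    context="augustoortiz/bert-finetuned-squad2 is a fine-tuned version of bert-base-cased.",
)
print(result["answer"], round(result["score"], 3))
```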
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "augustoortiz/bert-finetuned-squad2", "results": []}]}
augustoortiz/bert-finetuned-squad2
null
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #tf #bert #question-answering #generated_from_keras_callback #license-apache-2.0 #endpoints_compatible #region-us
augustoortiz/bert-finetuned-squad2 ================================== This model is a fine-tuned version of bert-base-cased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 1.2223 * Epoch: 0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 11091, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: mixed\_float16 ### Training results ### Framework versions * Transformers 4.17.0.dev0 * TensorFlow 2.8.0 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 11091, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: mixed\\_float16", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* TensorFlow 2.8.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #tf #bert #question-answering #generated_from_keras_callback #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 11091, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: mixed\\_float16", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* TensorFlow 2.8.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
fill-mask
transformers
# Austin MeDeBERTa This model was developed using further MLM pre-training on [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base), using a dataset of 1.1M clinical notes from the Austin Health EMR. The notes span discharge summaries, inpatient notes, radiology reports and histopathology reports. ## Model description This is the base version of the original DeBERTa model. The architecture and tokenizer are unchanged. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 9 - eval_batch_size: 9 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 0.9756 | 0.51 | 40000 | 0.9127 | | 0.8876 | 1.01 | 80000 | 0.8221 | | 0.818 | 1.52 | 120000 | 0.7786 | | 0.7836 | 2.03 | 160000 | 0.7438 | | 0.7672 | 2.54 | 200000 | 0.7165 | | 0.734 | 3.04 | 240000 | 0.6948 | | 0.7079 | 3.55 | 280000 | 0.6749 | | 0.6987 | 4.06 | 320000 | 0.6598 | | 0.6771 | 4.57 | 360000 | 0.6471 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu113 - Datasets 1.15.1 - Tokenizers 0.10.3
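A short fill-mask sketch for the checkpoint; the model id comes from this card, while the clinical sentence is invented for illustration:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="austin/Austin-MeDeBERTa")  # id taken from this card

# DeBERTa keeps "[MASK]" as its mask token; the sentence below is hypothetical.
for prediction in fill_mask("The patient was discharged home on oral [MASK] therapy."):
    print(prediction["token_str"], round(prediction["score"], 3))
```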
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "deberta-pretrained-large", "results": []}]}
austin/Austin-MeDeBERTa
null
[ "transformers", "pytorch", "deberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #deberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
Austin MeDeBERTa ================ This model was developed using further MLM pre-training on microsoft/deberta-base, using a dataset of 1.1M clinical notes from the Austin Health EMR. The notes span discharge summaries, inpatient notes, radiology reports and histopathology reports. Model description ----------------- This is the base version of the original DeBERTa model. The architecture and tokenizer are unchanged. Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 9 * eval\_batch\_size: 9 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu113 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 9\n* eval\\_batch\\_size: 9\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu113\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #deberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 9\n* eval\\_batch\\_size: 9\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu113\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # adr-ner This model is a fine-tuned version of [austin/Austin-MeDeBERTa](https://huggingface.co/austin/Austin-MeDeBERTa) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0434 - Precision: 0.7305 - Recall: 0.6934 - F1: 0.7115 - Accuracy: 0.9941 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 107 | 0.0630 | 0.0 | 0.0 | 0.0 | 0.9876 | | No log | 2.0 | 214 | 0.0308 | 0.4282 | 0.3467 | 0.3832 | 0.9900 | | No log | 3.0 | 321 | 0.0254 | 0.5544 | 0.5603 | 0.5573 | 0.9920 | | No log | 4.0 | 428 | 0.0280 | 0.6430 | 0.5751 | 0.6071 | 0.9929 | | 0.0465 | 5.0 | 535 | 0.0266 | 0.5348 | 0.7146 | 0.6118 | 0.9915 | | 0.0465 | 6.0 | 642 | 0.0423 | 0.7632 | 0.5793 | 0.6587 | 0.9939 | | 0.0465 | 7.0 | 749 | 0.0336 | 0.6957 | 0.6765 | 0.6860 | 0.9939 | | 0.0465 | 8.0 | 856 | 0.0370 | 0.6876 | 0.6702 | 0.6788 | 0.9936 | | 0.0465 | 9.0 | 963 | 0.0349 | 0.6555 | 0.7040 | 0.6789 | 0.9932 | | 0.0044 | 10.0 | 1070 | 0.0403 | 0.6910 | 0.6808 | 0.6858 | 0.9938 | | 0.0044 | 11.0 | 1177 | 0.0415 | 0.7140 | 0.6808 | 0.6970 | 0.9939 | | 0.0044 | 12.0 | 1284 | 0.0440 | 0.7349 | 0.6681 | 0.6999 | 0.9941 | | 0.0044 | 13.0 | 1391 | 0.0423 | 0.7097 | 0.6977 | 0.7036 | 0.9941 | | 0.0044 | 14.0 | 1498 | 0.0435 | 0.7174 | 0.6977 | 0.7074 | 0.9941 | | 0.0006 | 15.0 | 1605 | 0.0434 | 0.7305 | 0.6934 | 0.7115 | 0.9941 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
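A hedged sketch of running the NER checkpoint; the card does not document the label set or example inputs, so the snippet simply prints whatever entity groups the model returns for an invented sentence:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="austin/adr-ner",          # id taken from this card
    aggregation_strategy="simple",   # merge word pieces into whole entity spans
)

for entity in ner("The patient developed a severe rash after starting amoxicillin."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```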
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "adr-ner", "results": []}]}
austin/adr-ner
null
[ "transformers", "pytorch", "deberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #deberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
adr-ner ======= This model is a fine-tuned version of austin/Austin-MeDeBERTa on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0434 * Precision: 0.7305 * Recall: 0.6934 * F1: 0.7115 * Accuracy: 0.9941 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 12 * eval\_batch\_size: 12 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 15 ### Training results ### Framework versions * Transformers 4.14.1 * Pytorch 1.10.0+cu113 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #deberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu113\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
null
null
# ReadMe This is the text content of the readme
{"language": ["python"], "license": "mit", "tags": ["tag1", "tag2"], "datasets": ["dataset1", "dataset2"], "metrics": ["metric1", "metric2"], "thumbnail": "url to a thumbnail used in social sharing"}
avadesian/pg
null
[ "tag1", "tag2", "dataset:dataset1", "dataset:dataset2", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "python" ]
TAGS #tag1 #tag2 #dataset-dataset1 #dataset-dataset2 #license-mit #region-us
# ReadMe This is the text content of the readme
[ "# ReadMe\n\n这是readme的文本内容" ]
[ "TAGS\n#tag1 #tag2 #dataset-dataset1 #dataset-dataset2 #license-mit #region-us \n", "# ReadMe\n\n这是readme的文本内容" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-donald_trump This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 391 | 2.8721 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
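A minimal generation sketch; the sampling settings and the prompt are illustrative, since the card only documents the fine-tuning run:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="aviator-neural/gpt2-donald_trump")  # id from this card

outputs = generator(
    "Thank you everybody,",  # hypothetical prompt
    max_length=60,
    do_sample=True,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```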
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-donald_trump", "results": []}]}
aviator-neural/gpt2-donald_trump
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
gpt2-donald\_trump ================== This model is a fine-tuned version of gpt2 on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.8721 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart_jokes This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0282 ## Model description This model is trained on a jokes dataset, where you can ask a question and the model gives a funny answer. ## Intended uses & limitations ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3455 | 1.0 | 1914 | 3.0282 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
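The description says you can ask the model a question and get a joke back, but no snippet is given; a hedged sketch assuming the question is passed as plain text to the seq2seq checkpoint:

```python
from transformers import pipeline

joker = pipeline("text2text-generation", model="aviator-neural/mbart_jokes")  # id taken from this card

# Hypothetical prompt; the expected input format is not documented in the card.
print(joker("Why did the chicken cross the road?", max_length=64)[0]["generated_text"])
```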
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "mbart_jokes", "results": []}]}
aviator-neural/mbart_jokes
null
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
mbart\_jokes ============ This model is a fine-tuned version of facebook/bart-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 3.0282 Model description ----------------- This model is trained of jokes dataset , where you can ask a question and the model gives funny answer. Intended uses & limitations --------------------------- Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.9.1 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
fill-mask
transformers
## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition HeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture and it is BERT-Base config [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). <br> ### HeBert was trained on three dataset: 1. A Hebrew version of OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 millions sentences. 2. A Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/hewiki/latest/): ~650 MB of data, including over 63 millions words and 3.8 millions sentences 3. Emotion UGC data that was collected for the purpose of this study. (described below) We evaluated the model on emotion recognition and sentiment analysis, for a downstream tasks. ### Emotion UGC Data Description Our User Genrated Content (UGC) is comments written on articles collected from 3 major news sites, between January 2020 to August 2020,. Total data size ~150 MB of data, including over 7 millions words and 350K sentences. 4000 sentences annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation , fear, happy, sadness, surprise and trust) and overall sentiment / polarity<br> In order to valid the annotation, we search an agreement between raters to emotion in each sentence using krippendorff's alpha [(krippendorff, 1970)](https://journals.sagepub.com/doi/pdf/10.1177/001316447003000105). We left sentences that got alpha > 0.7. Note that while we found a general agreement between raters about emotion like happy, trust and disgust, there are few emotion with general disagreement about them, apparently given the complexity of finding them in the text (e.g. expectation and surprise). ## How to use ### For masked-LM model (can be fine-tunned to any down-stream task) ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT") model = AutoModel.from_pretrained("avichr/heBERT") from transformers import pipeline fill_mask = pipeline( "fill-mask", model="avichr/heBERT", tokenizer="avichr/heBERT" ) fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.") ``` ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) >>> sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') [[{'label': 'natural', 'score': 0.9978172183036804}, {'label': 'positive', 'score': 0.0014792329166084528}, {'label': 'negative', 'score': 0.0007035882445052266}]] >>> sentiment_analysis('קפה זה טעים') [[{'label': 'natural', 'score': 0.00047328314394690096}, {'label': 'possitive', 'score': 0.9994067549705505}, {'label': 'negetive', 'score': 0.00011996887042187154}]] >>> sentiment_analysis('אני לא אוהב את העולם') [[{'label': 'natural', 'score': 9.214012970915064e-05}, {'label': 'possitive', 'score': 8.876807987689972e-05}, {'label': 'negetive', 'score': 0.9998190999031067}]] ``` Our model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda) ### For NER model: ``` from transformers import pipeline # how to use? NER = pipeline( "token-classification", model="avichr/heBERT_NER", tokenizer="avichr/heBERT_NER", ) NER('דויד לומד באוניברסיטה העברית שבירושלים') ``` ## Stay tuned! We are still working on our model and will edit this page as we progress.<br> Note that we have released only sentiment analysis (polarity) at this point, emotion detection will be released later on.<br> our git: https://github.com/avichaychriqui/HeBERT ## If you use this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/heBERT
null
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1810.04805" ]
[]
TAGS #transformers #pytorch #jax #bert #fill-mask #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us
## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition HeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture and it is BERT-Base config (Devlin et al. 2018). <br> ### HeBert was trained on three datasets: 1. A Hebrew version of OSCAR (Ortiz, 2019): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences. 2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 million words and 3.8 million sentences 3. Emotion UGC data that was collected for the purpose of this study (described below). We evaluated the model on emotion recognition and sentiment analysis, for downstream tasks. ### Emotion UGC Data Description Our User Generated Content (UGC) is comments written on articles collected from 3 major news sites, between January 2020 and August 2020. Total data size ~150 MB of data, including over 7 million words and 350K sentences. 4000 sentences annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise and trust) and overall sentiment / polarity<br> In order to validate the annotation, we searched for agreement between raters on the emotion in each sentence using Krippendorff's alpha (Krippendorff, 1970). We kept sentences with alpha > 0.7. Note that while we found a general agreement between raters about emotions like happiness, trust and disgust, there are a few emotions with general disagreement about them, apparently given the complexity of finding them in the text (e.g. expectation and surprise). ## How to use ### For masked-LM model (can be fine-tuned to any down-stream task) ### For sentiment classification model (polarity ONLY): Our model is also available on AWS! for more information visit AWS' git ### For NER model: ## Stay tuned! We are still working on our model and will edit this page as we progress.<br> Note that we have released only sentiment analysis (polarity) at this point, emotion detection will be released later on.<br> our git: URL ## If you use this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
[ "## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition\nHeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture and it is BERT-Base config (Devlin et al. 2018). <br>", "### HeBert was trained on three dataset: \n1. A Hebrew version of OSCAR (Ortiz, 2019): ~9.8 GB of data, including 1 billion words and over 20.8 millions sentences. \n2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 millions words and 3.8 millions sentences\n3. Emotion UGC data that was collected for the purpose of this study. (described below)\nWe evaluated the model on emotion recognition and sentiment analysis, for a downstream tasks.", "### Emotion UGC Data Description\nOur User Genrated Content (UGC) is comments written on articles collected from 3 major news sites, between January 2020 to August 2020,. Total data size ~150 MB of data, including over 7 millions words and 350K sentences.\n4000 sentences annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation , fear, happy, sadness, surprise and trust) and overall sentiment / polarity<br>\nIn order to valid the annotation, we search an agreement between raters to emotion in each sentence using krippendorff's alpha (krippendorff, 1970). We left sentences that got alpha > 0.7. Note that while we found a general agreement between raters about emotion like happy, trust and disgust, there are few emotion with general disagreement about them, apparently given the complexity of finding them in the text (e.g. expectation and surprise).", "## How to use", "### For masked-LM model (can be fine-tunned to any down-stream task)", "### For sentiment classification model (polarity ONLY):\n\nOur model is also available on AWS! for more information visit AWS' git", "### For NER model:", "## Stay tuned!\nWe are still working on our model and will edit this page as we progress.<br>\nNote that we have released only sentiment analysis (polarity) at this point, emotion detection will be released later on.<br>\nour git: URL", "## If you use this model please cite us as :\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
[ "TAGS\n#transformers #pytorch #jax #bert #fill-mask #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition\nHeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture and it is BERT-Base config (Devlin et al. 2018). <br>", "### HeBert was trained on three dataset: \n1. A Hebrew version of OSCAR (Ortiz, 2019): ~9.8 GB of data, including 1 billion words and over 20.8 millions sentences. \n2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 millions words and 3.8 millions sentences\n3. Emotion UGC data that was collected for the purpose of this study. (described below)\nWe evaluated the model on emotion recognition and sentiment analysis, for a downstream tasks.", "### Emotion UGC Data Description\nOur User Genrated Content (UGC) is comments written on articles collected from 3 major news sites, between January 2020 to August 2020,. Total data size ~150 MB of data, including over 7 millions words and 350K sentences.\n4000 sentences annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation , fear, happy, sadness, surprise and trust) and overall sentiment / polarity<br>\nIn order to valid the annotation, we search an agreement between raters to emotion in each sentence using krippendorff's alpha (krippendorff, 1970). We left sentences that got alpha > 0.7. Note that while we found a general agreement between raters about emotion like happy, trust and disgust, there are few emotion with general disagreement about them, apparently given the complexity of finding them in the text (e.g. expectation and surprise).", "## How to use", "### For masked-LM model (can be fine-tunned to any down-stream task)", "### For sentiment classification model (polarity ONLY):\n\nOur model is also available on AWS! for more information visit AWS' git", "### For NER model:", "## Stay tuned!\nWe are still working on our model and will edit this page as we progress.<br>\nNote that we have released only sentiment analysis (polarity) at this point, emotion detection will be released later on.<br>\nour git: URL", "## If you use this model please cite us as :\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
token-classification
transformers
# HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HeBERT is a Hebrew pretrained language model. It is based on [Google's BERT](https://arxiv.org/abs/1810.04805) architecture and it is BERT-Base config. <br> HeBert was trained on three dataset: 1. A Hebrew version of [OSCAR](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 millions sentences. 2. A Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/): ~650 MB of data, including over 63 millions words and 3.8 millions sentences 3. Emotion User Generated Content (UGC) data that was collected for the purpose of this study (described below). ## Named-entity recognition (NER) The ability of the model to classify named entities in text, such as persons' names, organizations, and locations; tested on a labeled dataset from [Ben Mordecai and M Elhadad (2005)](https://www.cs.bgu.ac.il/~elhadad/nlpproj/naama/), and evaluated with F1-score. ### How to use ``` from transformers import pipeline # how to use? NER = pipeline( "token-classification", model="avichr/heBERT_NER", tokenizer="avichr/heBERT_NER", ) NER('דויד לומד באוניברסיטה העברית שבירושלים') ``` ## Other tasks [**Emotion Recognition Model**](https://huggingface.co/avichr/hebEMO_trust). An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) <br> [**Sentiment Analysis**](https://huggingface.co/avichr/heBERT_sentiment_analysis). <br> [**masked-LM model**](https://huggingface.co/avichr/heBERT) (can be fine-tunned to any down-stream task). ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={arXiv preprint arXiv:2102.01909}, year={2021} } ``` [git](https://github.com/avichaychriqui/HeBERT)
{}
avichr/heBERT_NER
null
[ "transformers", "pytorch", "bert", "token-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1810.04805" ]
[]
TAGS #transformers #pytorch #bert #token-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us
# HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition <img align="right" src="URL width="250"> HeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture and it is BERT-Base config. <br> HeBert was trained on three dataset: 1. A Hebrew version of OSCAR: ~9.8 GB of data, including 1 billion words and over 20.8 millions sentences. 2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 millions words and 3.8 millions sentences 3. Emotion User Generated Content (UGC) data that was collected for the purpose of this study (described below). ## Named-entity recognition (NER) The ability of the model to classify named entities in text, such as persons' names, organizations, and locations; tested on a labeled dataset from Ben Mordecai and M Elhadad (2005), and evaluated with F1-score. ### How to use ## Other tasks Emotion Recognition Model. An online model can be found at huggingface spaces or as colab notebook <br> Sentiment Analysis. <br> masked-LM model (can be fine-tunned to any down-stream task). ## Contact us Avichay Chriqui <br> Inbal yahav <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909. git
[ "# HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition\n<img align=\"right\" src=\"URL width=\"250\">\n\nHeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture and it is BERT-Base config. <br>\n\nHeBert was trained on three dataset: \n1. A Hebrew version of OSCAR: ~9.8 GB of data, including 1 billion words and over 20.8 millions sentences. \n2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 millions words and 3.8 millions sentences\n3. Emotion User Generated Content (UGC) data that was collected for the purpose of this study (described below).", "## Named-entity recognition (NER)\nThe ability of the model to classify named entities in text, such as persons' names, organizations, and locations; tested on a labeled dataset from Ben Mordecai and M Elhadad (2005), and evaluated with F1-score.", "### How to use", "## Other tasks\nEmotion Recognition Model.\nAn online model can be found at huggingface spaces or as colab notebook\n<br>\nSentiment Analysis.\n<br>\nmasked-LM model (can be fine-tunned to any down-stream task).", "## Contact us\nAvichay Chriqui <br>\nInbal yahav <br>\nThe Coller Semitic Languages AI Lab <br>\nThank you, תודה, شكرا <br>", "## If you used this model please cite us as :\nChriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.\n\ngit" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition\n<img align=\"right\" src=\"URL width=\"250\">\n\nHeBERT is a Hebrew pretrained language model. It is based on Google's BERT architecture and it is BERT-Base config. <br>\n\nHeBert was trained on three dataset: \n1. A Hebrew version of OSCAR: ~9.8 GB of data, including 1 billion words and over 20.8 millions sentences. \n2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 millions words and 3.8 millions sentences\n3. Emotion User Generated Content (UGC) data that was collected for the purpose of this study (described below).", "## Named-entity recognition (NER)\nThe ability of the model to classify named entities in text, such as persons' names, organizations, and locations; tested on a labeled dataset from Ben Mordecai and M Elhadad (2005), and evaluated with F1-score.", "### How to use", "## Other tasks\nEmotion Recognition Model.\nAn online model can be found at huggingface spaces or as colab notebook\n<br>\nSentiment Analysis.\n<br>\nmasked-LM model (can be fine-tunned to any down-stream task).", "## Contact us\nAvichay Chriqui <br>\nInbal yahav <br>\nThe Coller Semitic Languages AI Lab <br>\nThank you, תודה, شكرا <br>", "## If you used this model please cite us as :\nChriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.\n\ngit" ]
text-classification
transformers
## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition HeBERT is a Hebrew pre-trained language model. It is based on Google's BERT architecture and it is BERT-Base config [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). <br> HeBert was trained on three datasets: 1. A Hebrew version of OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences. 2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 million words and 3.8 million sentences 3. Emotion UGC data that was collected for the purpose of this study (described below). We evaluated the model on emotion recognition and sentiment analysis as downstream tasks. ### Emotion UGC Data Description Our User-Generated Content (UGC) is comments written on articles collected from 3 major news sites, between January 2020 and August 2020. The total data size is ~150 MB, including over 7 million words and 350K sentences. 4000 sentences were annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise, and trust) and overall sentiment/polarity. <br> In order to validate the annotation, we searched for agreement between raters on the emotion in each sentence using Krippendorff's alpha [(krippendorff, 1970)](https://journals.sagepub.com/doi/pdf/10.1177/001316447003000105). We kept sentences with alpha > 0.7. Note that while we found general agreement between raters about emotions like happiness, trust, and disgust, there are a few emotions with general disagreement among raters, apparently owing to the difficulty of identifying them in the text (e.g. expectation and surprise). ### Performance #### sentiment analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | natural | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | ## How to use ### For masked-LM model (can be fine-tuned to any down-stream task) ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT") model = AutoModel.from_pretrained("avichr/heBERT") from transformers import pipeline fill_mask = pipeline( "fill-mask", model="avichr/heBERT", tokenizer="avichr/heBERT" ) fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.") ``` ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use?
sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) >>> sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') [[{'label': 'natural', 'score': 0.9978172183036804}, {'label': 'positive', 'score': 0.0014792329166084528}, {'label': 'negative', 'score': 0.0007035882445052266}]] >>> sentiment_analysis('קפה זה טעים') [[{'label': 'natural', 'score': 0.00047328314394690096}, {'label': 'possitive', 'score': 0.9994067549705505}, {'label': 'negetive', 'score': 0.00011996887042187154}]] >>> sentiment_analysis('אני לא אוהב את העולם') [[{'label': 'natural', 'score': 9.214012970915064e-05}, {'label': 'possitive', 'score': 8.876807987689972e-05}, {'label': 'negetive', 'score': 0.9998190999031067}]] ``` Our model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda) ## Stay tuned! We are still working on our model and will edit this page as we progress.<br> Note that we have released only sentiment analysis (polarity) at this point; emotion detection will be released later on.<br> Our git: https://github.com/avichaychriqui/HeBERT ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={arXiv preprint arXiv:2102.01909}, year={2021} } ```
{}
avichr/heBERT_sentiment_analysis
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1810.04805" ]
[]
TAGS #transformers #pytorch #jax #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us
HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition ---------------------------------------------------------------------- HeBERT is a Hebrew pre-trained language model. It is based on Google's BERT architecture and it is BERT-Base config (Devlin et al. 2018). HeBert was trained on three datasets: 1. A Hebrew version of OSCAR (Ortiz, 2019): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences. 2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 million words and 3.8 million sentences 3. Emotion UGC data was collected for the purpose of this study. (described below) We evaluated the model on emotion recognition and sentiment analysis, for downstream tasks. ### Emotion UGC Data Description Our User-Generated Content (UGC) is comments written on articles collected from 3 major news sites, between January 2020 to August 2020, Total data size of ~150 MB of data, including over 7 million words and 350K sentences. 4000 sentences annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise, and trust) and overall sentiment/polarity In order to validate the annotation, we search for an agreement between raters to emotion in each sentence using Krippendorff's alpha (krippendorff, 1970). We left sentences that got alpha > 0.7. Note that while we found a general agreement between raters about emotions like happiness, trust, and disgust, there are few emotions with general disagreement about them, apparently given the complexity of finding them in the text (e.g. expectation and surprise). ### Performance #### sentiment analysis How to use ---------- ### For masked-LM model (can be fine-tunned to any down-stream task) ### For sentiment classification model (polarity ONLY): Our model is also available on AWS! for more information visit AWS' git Stay tuned! ----------- We are still working on our model and will edit this page as we progress. Note that we have released only sentiment analysis (polarity) at this point, emotion detection will be released later on. our git: URL If you used this model please cite us as : ------------------------------------------ Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
[ "### Emotion UGC Data Description\n\n\nOur User-Generated Content (UGC) is comments written on articles collected from 3 major news sites, between January 2020 to August 2020, Total data size of ~150 MB of data, including over 7 million words and 350K sentences.\n4000 sentences annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise, and trust) and overall sentiment/polarity \n\nIn order to validate the annotation, we search for an agreement between raters to emotion in each sentence using Krippendorff's alpha (krippendorff, 1970). We left sentences that got alpha > 0.7. Note that while we found a general agreement between raters about emotions like happiness, trust, and disgust, there are few emotions with general disagreement about them, apparently given the complexity of finding them in the text (e.g. expectation and surprise).", "### Performance", "#### sentiment analysis\n\n\n\nHow to use\n----------", "### For masked-LM model (can be fine-tunned to any down-stream task)", "### For sentiment classification model (polarity ONLY):\n\n\nOur model is also available on AWS! for more information visit AWS' git\n\n\nStay tuned!\n-----------\n\n\nWe are still working on our model and will edit this page as we progress. \n\nNote that we have released only sentiment analysis (polarity) at this point, emotion detection will be released later on. \n\nour git: URL\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909." ]
[ "TAGS\n#transformers #pytorch #jax #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Emotion UGC Data Description\n\n\nOur User-Generated Content (UGC) is comments written on articles collected from 3 major news sites, between January 2020 to August 2020, Total data size of ~150 MB of data, including over 7 million words and 350K sentences.\n4000 sentences annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise, and trust) and overall sentiment/polarity \n\nIn order to validate the annotation, we search for an agreement between raters to emotion in each sentence using Krippendorff's alpha (krippendorff, 1970). We left sentences that got alpha > 0.7. Note that while we found a general agreement between raters about emotions like happiness, trust, and disgust, there are few emotions with general disagreement about them, apparently given the complexity of finding them in the text (e.g. expectation and surprise).", "### Performance", "#### sentiment analysis\n\n\n\nHow to use\n----------", "### For masked-LM model (can be fine-tunned to any down-stream task)", "### For sentiment classification model (polarity ONLY):\n\n\nOur model is also available on AWS! for more information visit AWS' git\n\n\nStay tuned!\n-----------\n\n\nWe are still working on our model and will edit this page as we progress. \n\nNote that we have released only sentiment analysis (polarity) at this point, emotion detection will be released later on. \n\nour git: URL\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909." ]
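The card above validates the crowd annotation with Krippendorff's alpha and keeps only sentences with alpha > 0.7, but shows no code for that step. A rough sketch using the third-party `krippendorff` package is given below; the toy rating matrix is invented purely for illustration, and the 0.7 threshold is the one quoted in the card.

```
# pip install krippendorff numpy
import numpy as np
import krippendorff

# Toy nominal ratings for one emotion: rows are annotators, columns are sentences.
# The values are illustrative only; the real data has 3-10 annotators per sentence.
ratings = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0],
])

alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.3f}")

# Filtering rule described in the card: keep items only when agreement exceeds 0.7.
print("keep:", alpha > 0.7)
```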
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. | | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | |anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_anger
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
HebEMO - Emotion Recognition Model for Modern Hebrew ==================================================== <img align="right" src="URL width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. Emotion UGC Data Description ---------------------------- Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and eight emotions: anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. Performance ----------- ### Emotion Recognition *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis *Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git* How to use ---------- ### Emotion Recognition Model An online model can be found at huggingface spaces or as colab notebook <img src="URL width="300" height="300" /> ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ``` Contact us ---------- Avichay Chriqui Inbal yahav The Coller Semitic Languages AI Lab Thank you, תודה, شكرا If you used this model please cite us as : ------------------------------------------ Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
[ "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
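The pipeline calls in the cards above use `return_all_scores = True`, so each input sentence comes back as a list of label/score dicts. A small helper that reduces that output to the single best label per sentence, written only against the output format shown above, might look like this:

```
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True,
)

def top_label(scores_for_one_text):
    """Return the highest-scoring {'label': ..., 'score': ...} entry for one input."""
    return max(scores_for_one_text, key=lambda d: d["score"])

texts = ["קפה זה טעים", "אני לא אוהב את העולם"]  # examples taken from the card above
for text, scores in zip(texts, sentiment(texts)):
    best = top_label(scores)
    print(text, "->", best["label"], round(best["score"], 3))
```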
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. | | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | |anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_anticipation
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
HebEMO - Emotion Recognition Model for Modern Hebrew ==================================================== <img align="right" src="URL width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. Emotion UGC Data Description ---------------------------- Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and eight emotions: anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. Performance ----------- ### Emotion Recognition *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis *Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git* How to use ---------- ### Emotion Recognition Model An online model can be found at huggingface spaces or as colab notebook <img src="URL width="300" height="300" /> ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ``` Contact us ---------- Avichay Chriqui Inbal yahav The Coller Semitic Languages AI Lab Thank you, תודה, شكرا If you used this model please cite us as : ------------------------------------------ Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
[ "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
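The hebEMO cards demonstrate usage only through the HebEMO wrapper class and the shared sentiment checkpoint. A minimal sketch of querying this row's single-emotion checkpoint (`avichr/hebEMO_anticipation`) directly with the text-classification pipeline is shown below; the label names these checkpoints return are not documented in the card, so the code prints whatever comes back instead of assuming them.

```
from transformers import pipeline

# Checkpoint id taken from the row above; it scores a single emotion (anticipation).
clf = pipeline(
    "text-classification",
    model="avichr/hebEMO_anticipation",
    tokenizer="avichr/hebEMO_anticipation",
    return_all_scores=True,
)

# Hebrew example from the HebEMO demo above: "Life is beautiful and happy"
for scores in clf(["החיים יפים ומאושרים"]):
    for item in sorted(scores, key=lambda d: d["score"], reverse=True):
        print(item["label"], round(item["score"], 4))  # labels printed as-is, not assumed
```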
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. | | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | |anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_disgust
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
HebEMO - Emotion Recognition Model for Modern Hebrew ==================================================== <img align="right" src="URL width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. Emotion UGC Data Description ---------------------------- Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and eight emotions: anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. Performance ----------- ### Emotion Recognition *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis *Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git* How to use ---------- ### Emotion Recognition Model An online model can be found at huggingface spaces or as colab notebook <img src="URL width="300" height="300" /> ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ``` Contact us ---------- Avichay Chriqui Inbal yahav The Coller Semitic Languages AI Lab Thank you, תודה, شكرا If you used this model please cite us as : ------------------------------------------ Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
[ "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
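The HebEMO wrapper shown above returns a pandas DataFrame covering all eight emotions. A rough, hand-rolled sketch of that loop is below; only four per-emotion checkpoints (anger, anticipation, disgust, fear) appear as rows in this dataset, so the remaining `avichr/hebEMO_<emotion>` names are assumed by analogy and may differ.

```
import pandas as pd
from transformers import pipeline

# Emotion list from the card; checkpoint names beyond the four rows shown here are assumed.
EMOTIONS = ["anger", "anticipation", "disgust", "fear", "joy", "sadness", "surprise", "trust"]

def score_emotions(texts):
    """Run every per-emotion classifier over the texts and collect the winning label/score."""
    table = {"text": texts}
    for emotion in EMOTIONS:
        clf = pipeline("text-classification", model=f"avichr/hebEMO_{emotion}")  # loads 8 models
        # Without return_all_scores the pipeline yields the single best label per text.
        table[emotion] = [(out["label"], round(out["score"], 3)) for out in clf(texts)]
    return pd.DataFrame(table)

print(score_emotions(["החיים יפים ומאושרים"]))
```

The real HebEMO class presumably maps each checkpoint's labels to a positive-class probability; this sketch just reports the raw winning label, which is enough to see the interface.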
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. | | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | |anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)* ## How to use ### Emotion Recognition Model An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) ``` # !pip install pyplutchik==0.0.7 # !pip install transformers==4.14.1 !git clone https://github.com/avichaychriqui/HeBERT.git from HeBERT.src.HebEMO import * HebEMO_model = HebEMO() HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True) ``` <img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" /> ### For sentiment classification model (polarity ONLY): from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you used this model please cite us as : Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={INFORMS Journal on Data Science}, year={2022} } ```
{}
avichr/hebEMO_fear
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
HebEMO - Emotion Recognition Model for Modern Hebrew ==================================================== <img align="right" src="URL width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. Emotion UGC Data Description ---------------------------- Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and eight emotions: anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. Performance ----------- ### Emotion Recognition *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis *Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git* How to use ---------- ### Emotion Recognition Model An online model can be found at huggingface spaces or as colab notebook <img src="URL width="300" height="300" /> ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ``` Contact us ---------- Avichay Chriqui Inbal yahav The Coller Semitic Languages AI Lab Thank you, תודה, شكرا If you used this model please cite us as : ------------------------------------------ Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
[ "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. | | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | |anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*

## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as a [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1

!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()

HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />

### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```

## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal Yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>

## If you used this model please cite us as:
Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
```
@article{chriqui2021hebert,
  title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
  author={Chriqui, Avihay and Yahav, Inbal},
  journal={arXiv preprint arXiv:2102.01909},
  year={2021}
}
```
{}
avichr/hebEMO_joy
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
HebEMO - Emotion Recognition Model for Modern Hebrew ==================================================== <img align="right" src="URL width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. Emotion UGC Data Description ---------------------------- Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and eight emotions: anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. Performance ----------- ### Emotion Recognition *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis *Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git* How to use ---------- ### Emotion Recognition Model An online model can be found at huggingface spaces or as colab notebook <img src="URL width="300" height="300" /> ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ``` Contact us ---------- Avichay Chriqui Inbal yahav The Coller Semitic Languages AI Lab Thank you, תודה, شكرا If you used this model please cite us as : ------------------------------------------ Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.
[ "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909." ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909." ]
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. | | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | |anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*

## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as a [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1

!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()

HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />

### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```

## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal Yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>

## If you used this model please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
  title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
  author={Chriqui, Avihay and Yahav, Inbal},
  journal={INFORMS Journal on Data Science},
  year={2022}
}
```
{}
avichr/hebEMO_sadness
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
HebEMO - Emotion Recognition Model for Modern Hebrew ==================================================== <img align="right" src="URL width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. Emotion UGC Data Description ---------------------------- Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and eight emotions: anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. Performance ----------- ### Emotion Recognition *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis *Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git* How to use ---------- ### Emotion Recognition Model An online model can be found at huggingface spaces or as colab notebook <img src="URL width="300" height="300" /> ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ``` Contact us ---------- Avichay Chriqui Inbal yahav The Coller Semitic Languages AI Lab Thank you, תודה, شكرا If you used this model please cite us as : ------------------------------------------ Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
[ "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. | | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | |anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*

## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as a [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1

!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()

HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />

### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```

## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal Yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>

## If you used this model please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
  title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
  author={Chriqui, Avihay and Yahav, Inbal},
  journal={INFORMS Journal on Data Science},
  year={2022}
}
```
{}
avichr/hebEMO_surprise
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
HebEMO - Emotion Recognition Model for Modern Hebrew ==================================================== <img align="right" src="URL width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. Emotion UGC Data Description ---------------------------- Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and eight emotions: anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. Performance ----------- ### Emotion Recognition *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis *Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git* How to use ---------- ### Emotion Recognition Model An online model can be found at huggingface spaces or as colab notebook <img src="URL width="300" height="300" /> ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ``` Contact us ---------- Avichay Chriqui Inbal yahav The Coller Semitic Languages AI Lab Thank you, תודה, شكرا If you used this model please cite us as : ------------------------------------------ Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
[ "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
text-classification
transformers
# HebEMO - Emotion Recognition Model for Modern Hebrew <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. ## Emotion UGC Data Description Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. | | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment | |------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------| | **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 | ## Performance ### Emotion Recognition | emotion | f1-score | precision | recall | |-------------|----------|-----------|----------| | anger | 0.96 | 0.99 | 0.93 | | disgust | 0.97 | 0.98 | 0.96 | |anticipation | 0.82 | 0.80 | 0.87 | | fear | 0.79 | 0.88 | 0.72 | | joy | 0.90 | 0.97 | 0.84 | | sadness | 0.90 | 0.86 | 0.94 | | surprise | 0.40 | 0.44 | 0.37 | | trust | 0.83 | 0.86 | 0.80 | *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | neutral | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | *Sentiment (polarity) analysis model is also available on AWS! 
for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)*

## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as a [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1

!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()

HebEMO_model.hebemo(input_path = 'data/text_example.txt') # return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />

### For sentiment classification model (polarity ONLY):
```
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>> {'label': 'positive', 'score': 0.0014792329166084528},
>>> {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>> {'label': 'possitive', 'score': 0.9994067549705505},
>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>> {'label': 'possitive', 'score': 8.876807987689972e-05},
>>> {'label': 'negetive', 'score': 0.9998190999031067}]]
```

## Contact us
[Avichay Chriqui](mailto:[email protected]) <br>
[Inbal Yahav](mailto:[email protected]) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>

## If you used this model please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
  title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
  author={Chriqui, Avihay and Yahav, Inbal},
  journal={INFORMS Journal on Data Science},
  year={2022}
}
```
{}
avichr/hebEMO_trust
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
HebEMO - Emotion Recognition Model for Modern Hebrew ==================================================== <img align="right" src="URL width="250"> HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC), which was trained on a unique Covid-19 related dataset that we collected and annotated. HebEMO yielded a high performance of weighted average F1-score = 0.96 for polarity classification. Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best-reported performance, even when compared to the English language. Emotion UGC Data Description ---------------------------- Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites, between January 2020 to August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences. ~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and eight emotions: anger, disgust, anticipation , fear, joy, sadness, surprise and trust. The percentage of sentences in which each emotion appeared is found in the table below. Performance ----------- ### Emotion Recognition *The above metrics is for positive class (meaning, the emotion is reflected in the text).* ### Sentiment (Polarity) Analysis *Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git* How to use ---------- ### Emotion Recognition Model An online model can be found at huggingface spaces or as colab notebook <img src="URL width="300" height="300" /> ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? sentiment_analysis = pipeline( "sentiment-analysis", model="avichr/heBERT_sentiment_analysis", tokenizer="avichr/heBERT_sentiment_analysis", return_all_scores = True ) sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים') >>> [[{'label': 'neutral', 'score': 0.9978172183036804}, >>> {'label': 'positive', 'score': 0.0014792329166084528}, >>> {'label': 'negative', 'score': 0.0007035882445052266}]] sentiment_analysis('קפה זה טעים') >>> [[{'label': 'neutral', 'score': 0.00047328314394690096}, >>> {'label': 'possitive', 'score': 0.9994067549705505}, >>> {'label': 'negetive', 'score': 0.00011996887042187154}]] sentiment_analysis('אני לא אוהב את העולם') >>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, >>> {'label': 'possitive', 'score': 8.876807987689972e-05}, >>> {'label': 'negetive', 'score': 0.9998190999031067}]] ``` Contact us ---------- Avichay Chriqui Inbal yahav The Coller Semitic Languages AI Lab Thank you, תודה, شكرا If you used this model please cite us as : ------------------------------------------ Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
[ "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n", "### Emotion Recognition\n\n\n\n*The above metrics is for positive class (meaning, the emotion is reflected in the text).*", "### Sentiment (Polarity) Analysis\n\n\n\n*Sentiment (polarity) analysis model is also available on AWS! for more information visit AWS' git*\n\n\nHow to use\n----------", "### Emotion Recognition Model\n\n\nAn online model can be found at huggingface spaces or as colab notebook\n\n\n<img src=\"URL width=\"300\" height=\"300\" />", "### For sentiment classification model (polarity ONLY):\n\n\n\n```\nfrom transformers import AutoTokenizer, AutoModel, pipeline\n\ntokenizer = AutoTokenizer.from_pretrained(\"avichr/heBERT_sentiment_analysis\") #same as 'avichr/heBERT' tokenizer\nmodel = AutoModel.from_pretrained(\"avichr/heBERT_sentiment_analysis\")", "# how to use?\nsentiment_analysis = pipeline(\n \"sentiment-analysis\",\n model=\"avichr/heBERT_sentiment_analysis\",\n tokenizer=\"avichr/heBERT_sentiment_analysis\",\n return_all_scores = True\n)\n\nsentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')\t\n>>> [[{'label': 'neutral', 'score': 0.9978172183036804},\n>>> {'label': 'positive', 'score': 0.0014792329166084528},\n>>> {'label': 'negative', 'score': 0.0007035882445052266}]]\n\nsentiment_analysis('קפה זה טעים')\n>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},\n>>> {'label': 'possitive', 'score': 0.9994067549705505},\n>>> {'label': 'negetive', 'score': 0.00011996887042187154}]]\n\nsentiment_analysis('אני לא אוהב את העולם')\n>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05}, \n>>> {'label': 'possitive', 'score': 8.876807987689972e-05}, \n>>> {'label': 'negetive', 'score': 0.9998190999031067}]]\n\n```\n\nContact us\n----------\n\n\nAvichay Chriqui \n\nInbal yahav \n\nThe Coller Semitic Languages AI Lab \n\nThank you, תודה, شكرا \n\n\n\nIf you used this model please cite us as :\n------------------------------------------\n\n\nChriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming." ]
text-generation
transformers
# rickbot DialoGPT
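The card gives only a title; since this record is tagged as a GPT-2 conversational (DialoGPT-style) checkpoint, a hypothetical single-turn chat might look like the sketch below. It assumes the usual DialoGPT convention of appending the EOS token after each turn; the model id is taken from this record, and the prompt and generation settings are illustrative only.
```
# Hypothetical usage sketch, not part of the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "avinashshrangee/DialoGPT-small-Ricky"  # id taken from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

user_input = "Hi Rick, how are you?"  # illustrative prompt
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
# decode only the newly generated tokens (the bot's reply)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```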
{"tags": ["conversational"]}
avinashshrangee/DialoGPT-small-Ricky
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# rickbot Dialo-GPT
[ "# rickbot Dialo-GPT" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# rickbot Dialo-GPT" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2637 | 1.0 | 5533 | 1.2125 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
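The card documents the fine-tuning setup but not how to query the resulting checkpoint. A minimal inference sketch, assuming the standard question-answering pipeline, is shown below; the model id comes from this record, and the question/context pair is made up for illustration.
```
# Illustrative inference sketch (not part of the card above).
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="avioo1/distilbert-base-uncased-finetuned-squad",  # id from this record
)

result = qa(
    question="Which dataset was the model fine-tuned on?",  # made-up example
    context="This distilbert-base-uncased checkpoint was fine-tuned on the SQuAD dataset.",
)
print(result)  # dict with 'score', 'start', 'end', 'answer'
```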
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"]}
avioo1/distilbert-base-uncased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-squad ======================================= This model is a fine-tuned version of distilbert-base-uncased on the squad dataset. It achieves the following results on the evaluation set: * Loss: 1.2125 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-squad2-finetuned-squad This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.0220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 74 | 1.7148 | | No log | 2.0 | 148 | 1.6994 | | No log | 3.0 | 222 | 1.7922 | | No log | 4.0 | 296 | 1.9947 | | No log | 5.0 | 370 | 2.0753 | | No log | 6.0 | 444 | 2.2096 | | 0.9547 | 7.0 | 518 | 2.3070 | | 0.9547 | 8.0 | 592 | 2.6947 | | 0.9547 | 9.0 | 666 | 2.7169 | | 0.9547 | 10.0 | 740 | 2.8503 | | 0.9547 | 11.0 | 814 | 3.1990 | | 0.9547 | 12.0 | 888 | 3.4931 | | 0.9547 | 13.0 | 962 | 3.6575 | | 0.3191 | 14.0 | 1036 | 3.1863 | | 0.3191 | 15.0 | 1110 | 3.7922 | | 0.3191 | 16.0 | 1184 | 3.6336 | | 0.3191 | 17.0 | 1258 | 4.1156 | | 0.3191 | 18.0 | 1332 | 4.1353 | | 0.3191 | 19.0 | 1406 | 3.9888 | | 0.3191 | 20.0 | 1480 | 4.4290 | | 0.1904 | 21.0 | 1554 | 4.0473 | | 0.1904 | 22.0 | 1628 | 4.5048 | | 0.1904 | 23.0 | 1702 | 4.4026 | | 0.1904 | 24.0 | 1776 | 4.2864 | | 0.1904 | 25.0 | 1850 | 4.3941 | | 0.1904 | 26.0 | 1924 | 4.4921 | | 0.1904 | 27.0 | 1998 | 4.9139 | | 0.1342 | 28.0 | 2072 | 4.8914 | | 0.1342 | 29.0 | 2146 | 5.0148 | | 0.1342 | 30.0 | 2220 | 5.0220 | ### Framework versions - Transformers 4.11.0 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
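For illustration only, the hyperparameters listed above roughly correspond to a `transformers` `TrainingArguments` configuration like the following; the output directory name and the per-epoch evaluation setting are assumptions, not taken from the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-squad2-finetuned-squad",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=30,
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # the table reports a validation loss once per epoch
)
```

Given that the validation loss in the table climbs steadily after the first epoch, far fewer epochs or an early-stopping callback would be a natural variation to try.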
{"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "roberta-base-squad2-finetuned-squad", "results": []}]}
avioo1/roberta-base-squad2-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #license-cc-by-4.0 #endpoints_compatible #region-us
roberta-base-squad2-finetuned-squad =================================== This model is a fine-tuned version of deepset/roberta-base-squad2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 5.0220 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 30 ### Training results ### Framework versions * Transformers 4.11.0 * Pytorch 1.9.0+cu102 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #license-cc-by-4.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4981 - Matthews Correlation: 0.4218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5248 | 1.0 | 535 | 0.4981 | 0.4218 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
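A minimal sketch of running the checkpoint and scoring it with the metric reported above (Matthews correlation), assuming the `transformers` pipeline, `scikit-learn`, and the default `LABEL_0`/`LABEL_1` label names; the example sentences and gold labels are toy values:

```python
from sklearn.metrics import matthews_corrcoef
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="avneet/distilbert-base-uncased-finetuned-cola",
)

# CoLA is binary linguistic acceptability; parse LABEL_0/LABEL_1 back into 0/1
sentences = ["The book was read by the student.", "Book the read student the."]
preds = [int(p["label"].split("_")[-1]) for p in clf(sentences)]

gold = [1, 0]  # toy gold labels for the two sentences above
print(matthews_corrcoef(gold, preds))
```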
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.42176824452830747}}]}]}
avneet/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.4981 * Matthews Correlation: 0.4218 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.9.1 * Pytorch 1.9.0+cu102 * Datasets 1.10.2 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3651 - Accuracy: 0.9151 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1902 | 1.0 | 4210 | 0.3102 | 0.9117 | | 0.1293 | 2.0 | 8420 | 0.3672 | 0.9048 | | 0.084 | 3.0 | 12630 | 0.3651 | 0.9151 | | 0.0682 | 4.0 | 16840 | 0.3971 | 0.9037 | | 0.0438 | 5.0 | 21050 | 0.4720 | 0.9117 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
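A rough sketch of reproducing the reported validation accuracy, assuming the `datasets` library's GLUE/SST-2 validation split, the `transformers` pipeline, and the default `LABEL_0`/`LABEL_1` label names (batching and padding details are simplified):

```python
from datasets import load_dataset
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="avneet/distilbert-base-uncased-finetuned-sst2",
)

# GLUE SST-2 validation split: short sentences with 0/1 sentiment labels
val = load_dataset("glue", "sst2", split="validation")

preds = [int(p["label"].split("_")[-1]) for p in clf(val["sentence"])]
accuracy = sum(int(p == y) for p, y in zip(preds, val["label"])) / len(preds)
print(f"accuracy: {accuracy:.4f}")  # the card reports 0.9151
```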
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9151376146788991}}]}]}
avneet/distilbert-base-uncased-finetuned-sst2
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-sst2 ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.3651 * Accuracy: 0.9151 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.9.1 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
---
tags:
- conversational
---

# Rick DialoGPT model
{}
avnish100/DialoGPT-small-rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
---
tags:
- conversational
---

# Rick DialoGPT model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
## Model description

This chatbot is the graduation project of Andrey Vorozhko, a student at UII (University of Artificial Intelligence).

Training was completed in March 2022.

The chatbot is built on top of the [Kirili4ik/ruDialoGpt3-medium-finetuned-telegram](https://huggingface.co/Kirili4ik/ruDialoGpt3-medium-finetuned-telegram) model.

The model has now been further fine-tuned on 27000 jokes (14 epochs, at a training speed in Colab of 2-6 hours per epoch) and can follow the context of a conversation. The context has to be limited to the last few messages, however, because the more context there is, the slower the model runs, and the context snowballs as the conversation goes on.

Inference is available in [spaces](https://huggingface.co/spaces/avorozhko/funbot):

There you can talk to the bot. The context is limited to the last 10 messages.

The bot does produce jokes, but for now more by accident than by design. Still, it can keep up a conversation and even be somewhat entertaining.

Since this is text generation, the bot will always give different answers to the same phrase.

A custom metric was also used to assess the quality of this model: the angular distance between the embeddings of y_train and the prediction.

That is, we took the model's first embedding layer and ran the predictions and the labels through it to obtain word vectors. The word vectors were then summed to get overall (summed) vectors for the labels and for the predictions. The smaller the angle between them, the better. The calculations use the cosine of this angle; since cos 0 = 1, this is very convenient: the closer the value is to 1, the better.

This is the distribution of these values per epoch on the VALIDATION set (1406 jokes):

```
{1: tensor(0.9357, device='cuda:0', grad_fn=<DivBackward0>),
 2: tensor(0.9390, device='cuda:0', grad_fn=<DivBackward0>),
 3: tensor(0.9417, device='cuda:0', grad_fn=<DivBackward0>),
 4: tensor(0.9439, device='cuda:0', grad_fn=<DivBackward0>),
 5: tensor(0.9470, device='cuda:0', grad_fn=<DivBackward0>),
 6: tensor(0.9537, device='cuda:0', grad_fn=<DivBackward0>),
 7: tensor(0.9568, device='cuda:0', grad_fn=<DivBackward0>),
 8: tensor(0.9592, device='cuda:0', grad_fn=<DivBackward0>),
 9: tensor(0.9610, device='cuda:0', grad_fn=<DivBackward0>),
 10: tensor(0.9622, device='cuda:0', grad_fn=<DivBackward0>),
 11: tensor(0.9628, device='cuda:0', grad_fn=<DivBackward0>),
 12: tensor(0.9632, device='cuda:0', grad_fn=<DivBackward0>),
 13: tensor(0.9630, device='cuda:0', grad_fn=<DivBackward0>),
 14: tensor(0.9634, device='cuda:0', grad_fn=<DivBackward0>),
 15: tensor(0.9634, device='cuda:0', grad_fn=<DivBackward0>)}
```

Epoch 14, with a score of 0.9634, was chosen for inference. Beyond that point the model apparently starts to overfit.
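A minimal sketch of the custom metric described above (the cosine of the angle between the summed first-layer embedding vectors of a reference text and a generated text), assuming a `transformers` causal LM whose token embeddings are exposed via `get_input_embeddings()`; the function and variable names are illustrative, not taken from the original training notebook:

```python
import torch

def summed_embedding_cosine(model, tokenizer, reference: str, prediction: str) -> float:
    """Cosine between the summed token-embedding vectors of two texts."""
    emb = model.get_input_embeddings()  # first (token) embedding layer of the model
    with torch.no_grad():
        ref_ids = tokenizer(reference, return_tensors="pt")["input_ids"]
        pred_ids = tokenizer(prediction, return_tensors="pt")["input_ids"]
        ref_vec = emb(ref_ids).sum(dim=1).squeeze(0)    # sum word vectors into one vector
        pred_vec = emb(pred_ids).sum(dim=1).squeeze(0)
    # cos 0 = 1, so values closer to 1 mean the two texts are closer in embedding space
    return torch.nn.functional.cosine_similarity(ref_vec, pred_vec, dim=0).item()
```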
{}
avorozhko/ruDialoGpt3-medium-finetuned-context
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
## Model description

This chatbot is the graduation project of Andrey Vorozhko, a student at UII (University of Artificial Intelligence).

Training was completed in March 2022.

The chatbot is built on top of the Kirili4ik/ruDialoGpt3-medium-finetuned-telegram model.

The model has now been further fine-tuned on 27000 jokes (14 epochs, at a training speed in Colab of 2-6 hours per epoch) and can follow the context of a conversation. The context has to be limited to the last few messages, however, because the more context there is, the slower the model runs, and the context snowballs as the conversation goes on.

Inference is available in spaces:

There you can talk to the bot. The context is limited to the last 10 messages.

The bot does produce jokes, but for now more by accident than by design. Still, it can keep up a conversation and even be somewhat entertaining.

Since this is text generation, the bot will always give different answers to the same phrase.

A custom metric was also used to assess the quality of this model: the angular distance between the embeddings of y_train and the prediction.

That is, we took the model's first embedding layer and ran the predictions and the labels through it to obtain word vectors. The word vectors were then summed to get overall (summed) vectors for the labels and for the predictions. The smaller the angle between them, the better. The calculations use the cosine of this angle; since cos 0 = 1, this is very convenient: the closer the value is to 1, the better.

This is the distribution of these values per epoch on the VALIDATION set (1406 jokes):

Epoch 14, with a score of 0.9634, was chosen for inference. Beyond that point the model apparently starts to overfit.
[ "## Описание модели\n\nЭтот чатбот - дипломная работа студента Андрея Ворожко в УИИ (Университет Искусственного Интеллекта).\n\nОкончание обучения - март 2022 года.\n\nЧатбот сделан на основе модели Kirili4ik/ruDialoGpt3-medium-finetuned-telegram\n\nТеперь модель дообучена на основе 27000 анекдотов (14 эпох, скорость обучения в колабе 2-6 часов на эпоху) и умеет понимать контекст разговора. Однако контекст приходится ограничивать несколькими последними сообщениями потому что чем больше контекста тем медленнее модель работает, а контекст растет как снежный ком в процессе разговора.\n\nИнференс находится в spaces:\n\nТам с ботом можно поговорить. Контекст ограничен 10 последними сообщениями.\n\nШутки бот выдает, но пока скорее случайно, чем намеренно. Однако разговор поддержать способен и даже немного развлечь.\n\nТак как это генерация текста, то на одну и ту же фразу бот всегда будет выдавать разные ответы.\n\nТакже для определения качества данной модели использовалась кастомная метрика - угловое расстояния между эмбеддингами y_train и предикта.\n\nТо есть мы взяли первый слой эмбеддинга модели и прогоняли предикты и лейблы, получили вектора слов. Потом вектора слов суммировали и получили общие (суммарные) вектора лейблов и предиктов. Чем меньше угол между ними, тем лучше. При рассчетах ориентировались на косинус этого угла, так как cos 0 = 1, то это очень удобно - чем ближе показатель к 1, тем лучше.\n\nВот такое распределение этих значений получилось по эпохам на ПРОВЕРОЧНОЙ выборке (1406 анекдотов):\n\nДля инференса выбрана 14-я эпоха с точностью 0.9634. Далее, судя по всему идет уже переобучение." ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "## Описание модели\n\nЭтот чатбот - дипломная работа студента Андрея Ворожко в УИИ (Университет Искусственного Интеллекта).\n\nОкончание обучения - март 2022 года.\n\nЧатбот сделан на основе модели Kirili4ik/ruDialoGpt3-medium-finetuned-telegram\n\nТеперь модель дообучена на основе 27000 анекдотов (14 эпох, скорость обучения в колабе 2-6 часов на эпоху) и умеет понимать контекст разговора. Однако контекст приходится ограничивать несколькими последними сообщениями потому что чем больше контекста тем медленнее модель работает, а контекст растет как снежный ком в процессе разговора.\n\nИнференс находится в spaces:\n\nТам с ботом можно поговорить. Контекст ограничен 10 последними сообщениями.\n\nШутки бот выдает, но пока скорее случайно, чем намеренно. Однако разговор поддержать способен и даже немного развлечь.\n\nТак как это генерация текста, то на одну и ту же фразу бот всегда будет выдавать разные ответы.\n\nТакже для определения качества данной модели использовалась кастомная метрика - угловое расстояния между эмбеддингами y_train и предикта.\n\nТо есть мы взяли первый слой эмбеддинга модели и прогоняли предикты и лейблы, получили вектора слов. Потом вектора слов суммировали и получили общие (суммарные) вектора лейблов и предиктов. Чем меньше угол между ними, тем лучше. При рассчетах ориентировались на косинус этого угла, так как cos 0 = 1, то это очень удобно - чем ближе показатель к 1, тем лучше.\n\nВот такое распределение этих значений получилось по эпохам на ПРОВЕРОЧНОЙ выборке (1406 анекдотов):\n\nДля инференса выбрана 14-я эпоха с точностью 0.9634. Далее, судя по всему идет уже переобучение." ]
null
keras
# [Deep Chimpact](https://www.drivendata.org/competitions/82/competition-wildlife-video-depth-estimation/page/390/) > Depth Estimation for Wildlife Conservation (1st place solution) <div align=center> <img src="https://user-images.githubusercontent.com/36858976/138281204-c3cbcb77-11ca-448b-a693-cb3cfa3c5181.png" width=800> ## Overview Healthy natural ecosystems have wide-ranging benefits from public health to the economy to agriculture. In order to protect the Earth's natural resources, conservationists need to be able to monitor species population sizes and population change. Camera traps are widely used in conservation research to capture images and videos of wildlife without human interference. Using statistical models for distance sampling, the frequency of animal sightings can be combined with the distance of each animal from the camera to estimate a species' full population size. However, getting distances from camera trap footage currently entails an extremely manual, time-intensive process. It takes a researcher more than **10 minutes** on average to label distance for every **1 minute** of video - that’s a lot of time when you have a million videos! This also creates a bottleneck for critical information that conservationists can use to **monitor wildlife populations**. > Your goal in this challenge is to use machine learning to automatically estimate the distance between a camera trap and an animal in a series of camera trap videos. You will be given a series of timestamps indicating when animals are visible in each camera trap video. To complete the challenge, you will predict the distance between the animal and the camera at each point in time. Along the way, keep an eye out for some sneaky leopards hunting at night, baby chimpanzees getting piggy-back rides, and diva elephants that can't get enough of the limelight. By contributing to this challenge, you can help advance cutting-edge methods for keeping these animal populations (and humans) healthy and safe!
{}
awsaf49/deep-chimpact
null
[ "keras", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #keras #region-us
# Deep Chimpact > Depth Estimation for Wildlife Conservation (1st place solution) <div align=center> <img src="URL width=800> ## Overview Healthy natural ecosystems have wide-ranging benefits from public health to the economy to agriculture. In order to protect the Earth's natural resources, conservationists need to be able to monitor species population sizes and population change. Camera traps are widely used in conservation research to capture images and videos of wildlife without human interference. Using statistical models for distance sampling, the frequency of animal sightings can be combined with the distance of each animal from the camera to estimate a species' full population size. However, getting distances from camera trap footage currently entails an extremely manual, time-intensive process. It takes a researcher more than 10 minutes on average to label distance for every 1 minute of video - that’s a lot of time when you have a million videos! This also creates a bottleneck for critical information that conservationists can use to monitor wildlife populations. > Your goal in this challenge is to use machine learning to automatically estimate the distance between a camera trap and an animal in a series of camera trap videos. You will be given a series of timestamps indicating when animals are visible in each camera trap video. To complete the challenge, you will predict the distance between the animal and the camera at each point in time. Along the way, keep an eye out for some sneaky leopards hunting at night, baby chimpanzees getting piggy-back rides, and diva elephants that can't get enough of the limelight. By contributing to this challenge, you can help advance cutting-edge methods for keeping these animal populations (and humans) healthy and safe!
[ "# Deep Chimpact\n> Depth Estimation for Wildlife Conservation (1st place solution)\n\n<div align=center> <img src=\"URL width=800>", "## Overview\n\nHealthy natural ecosystems have wide-ranging benefits from public health to the economy to agriculture. In order to protect the Earth's natural resources, conservationists need to be able to monitor species population sizes and population change. Camera traps are widely used in conservation research to capture images and videos of wildlife without human interference. Using statistical models for distance sampling, the frequency of animal sightings can be combined with the distance of each animal from the camera to estimate a species' full population size.\n\nHowever, getting distances from camera trap footage currently entails an extremely manual, time-intensive process. It takes a researcher more than 10 minutes on average to label distance for every 1 minute of video - that’s a lot of time when you have a million videos! This also creates a bottleneck for critical information that conservationists can use to monitor wildlife populations.\n\n> Your goal in this challenge is to use machine learning to automatically estimate the distance between a camera trap and an animal in a series of camera trap videos. You will be given a series of timestamps indicating when animals are visible in each camera trap video. To complete the challenge, you will predict the distance between the animal and the camera at each point in time.\n\nAlong the way, keep an eye out for some sneaky leopards hunting at night, baby chimpanzees getting piggy-back rides, and diva elephants that can't get enough of the limelight. By contributing to this challenge, you can help advance cutting-edge methods for keeping these animal populations (and humans) healthy and safe!" ]
[ "TAGS\n#keras #region-us \n", "# Deep Chimpact\n> Depth Estimation for Wildlife Conservation (1st place solution)\n\n<div align=center> <img src=\"URL width=800>", "## Overview\n\nHealthy natural ecosystems have wide-ranging benefits from public health to the economy to agriculture. In order to protect the Earth's natural resources, conservationists need to be able to monitor species population sizes and population change. Camera traps are widely used in conservation research to capture images and videos of wildlife without human interference. Using statistical models for distance sampling, the frequency of animal sightings can be combined with the distance of each animal from the camera to estimate a species' full population size.\n\nHowever, getting distances from camera trap footage currently entails an extremely manual, time-intensive process. It takes a researcher more than 10 minutes on average to label distance for every 1 minute of video - that’s a lot of time when you have a million videos! This also creates a bottleneck for critical information that conservationists can use to monitor wildlife populations.\n\n> Your goal in this challenge is to use machine learning to automatically estimate the distance between a camera trap and an animal in a series of camera trap videos. You will be given a series of timestamps indicating when animals are visible in each camera trap video. To complete the challenge, you will predict the distance between the animal and the camera at each point in time.\n\nAlong the way, keep an eye out for some sneaky leopards hunting at night, baby chimpanzees getting piggy-back rides, and diva elephants that can't get enough of the limelight. By contributing to this challenge, you can help advance cutting-edge methods for keeping these animal populations (and humans) healthy and safe!" ]
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
awvik360/DialoGPT-medium-plemons
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# My Awesome Model
[ "# My Awesome Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# My Awesome Model" ]
text-generation
null
# My Awesome Model
{"tags": ["conversational"]}
awvik360/DialoGPT-medium-plemons2
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #conversational #region-us
# My Awesome Model
[ "# My Awesome Model" ]
[ "TAGS\n#conversational #region-us \n", "# My Awesome Model" ]
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
awvik360/DialoGPT-small-plemons
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# My Awesome Model
[ "# My Awesome Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# My Awesome Model" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.3390 - Accuracy: 0.9373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2864 | 1.0 | 688 | 0.2154 | 0.9286 | | 0.1648 | 2.0 | 1376 | 0.2238 | 0.9357 | | 0.0759 | 3.0 | 2064 | 0.3351 | 0.9365 | | 0.044 | 4.0 | 2752 | 0.3390 | 0.9373 | | 0.0308 | 5.0 | 3440 | 0.4346 | 0.9365 | | 0.0113 | 6.0 | 4128 | 0.4708 | 0.9365 | | 0.006 | 7.0 | 4816 | 0.5533 | 0.9325 | | 0.0047 | 8.0 | 5504 | 0.5888 | 0.9310 | | 0.0001 | 9.0 | 6192 | 0.5961 | 0.9333 | | 0.0 | 10.0 | 6880 | 0.5992 | 0.9357 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
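The card does not show an inference snippet; a minimal sketch with the `transformers` pipeline, using the widget sentence from this repository's metadata:

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa",
)

# "Saya mengapresiasi usaha anda" roughly means "I appreciate your effort"
print(sentiment("Saya mengapresiasi usaha anda"))
```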
{"language": "id", "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["indonlu"], "metrics": ["accuracy"], "widget": [{"text": "Saya mengapresiasi usaha anda"}], "model-index": [{"name": "bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "indonlu", "type": "indonlu", "args": "smsa"}, "metrics": [{"type": "accuracy", "value": 0.9373015873015873, "name": "Accuracy"}]}]}]}
ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa
null
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "id", "dataset:indonlu", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #bert #text-classification #generated_from_trainer #id #dataset-indonlu #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa =========================================================== This model is a fine-tuned version of cahya/bert-base-indonesian-1.5G on the indonlu dataset. It achieves the following results on the evaluation set: * Loss: 0.3390 * Accuracy: 0.9373 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.14.1 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #id #dataset-indonlu #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Indonesian GPT-2-medium finetuned on Indonesian poems

This is the [Indonesian gpt2-medium model](https://huggingface.co/flax-community/gpt2-medium-indonesian) fine-tuned on Indonesian poems. The dataset can be found [here](https://huggingface.co/datasets/id_puisi). All training was done on Google Colab Jupyter Notebook (soon).

The dataset is split into two subsets with the details below:

| split | count (examples) | percentage |
| ---------- | ---------- | -------------- |
| train | 7,358 | 80% |
| validation | 1,890 | 20% |

### Evaluation results
The model evaluation results after 10 epochs are as follows:

| dataset | train/loss | eval/loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| [id puisi](https://huggingface.co/datasets/id_puisi) | 3.104 | 3.384 | 29.4884 |

The logs can be found on the [wandb page here](https://wandb.ai/ayamerushia/gpt-2_poem/runs/3jsu1orj/overview?workspace=user-ayamerushia)
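A minimal generation sketch, assuming the `transformers` text-generation pipeline; the prompt is the widget example from this repository's metadata, and the sampling settings are illustrative:

```python
from transformers import pipeline

poet = pipeline(
    "text-generation",
    model="ayameRushia/gpt2-medium-fine-tuning-indonesia-poem",
)

prompt = "Wahai rembulan yang tertutup awan hujan"
out = poet(prompt, max_length=64, do_sample=True, top_p=0.95, temperature=0.9)
print(out[0]["generated_text"])
```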
{"language": "id", "widget": [{"text": "Wahai rembulan yang tertutup awan hujan"}]}
ayameRushia/gpt2-medium-fine-tuning-indonesia-poem
null
[ "transformers", "pytorch", "gpt2", "text-generation", "id", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Indonesian GPT-2-medium finetuned on Indonesian poems
=====================================================

This is the Indonesian gpt2-medium model fine-tuned on Indonesian poems. The dataset can be found here. All training was done on Google Colab Jupyter Notebook (soon).

The dataset is split into two subsets with the details below:

split: train, count (examples): 7,358, percentage: 80%
split: validation, count (examples): 1,890, percentage: 20%

### Evaluation results

The model evaluation results after 10 epochs are as follows:

The logs can be found in wandb page here
[ "### Evaluation results\n\n\nThe model evaluation results after 10 epochs are as follows:\n\n\n\nThe logs can be found in wandb page here" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Evaluation results\n\n\nThe model evaluation results after 10 epochs are as follows:\n\n\n\nThe logs can be found in wandb page here" ]
text-generation
transformers
# Indonesian GPT-2 finetuned on Indonesian poems

This is the [Indonesian gpt2-small model](https://huggingface.co/flax-community/gpt2-small-indonesian) fine-tuned on Indonesian poems. The dataset can be found [here](https://huggingface.co/datasets/id_puisi). All training was done on Google Colab Jupyter Notebook (soon).

The dataset is split into two subsets with the details below:

| split | count (examples) | percentage |
| ---------- | ---------- | -------------- |
| train | 7,358 | 80% |
| validation | 1,890 | 20% |

### Evaluation results
The model evaluation results after 10 epochs are as follows:

| dataset | train/loss | eval/loss | eval perplexity |
| ---------- | ---------- | -------------- | ---------- |
| [id puisi](https://huggingface.co/datasets/id_puisi) | 3.324700 | 3.502665 | 33.20 |

The logs can be found on the [wandb page here](https://wandb.ai/ayamerushia/gpt-2_poem/runs/36ymudz9/overview?workspace=user-ayamerushia) or tensorboard [here](https://huggingface.co/ayameRushia/gpt2-small-indonesia-fine-tuning-poem/tensorboard)
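As a quick sanity check (not part of the training script), the reported eval perplexity is simply the exponential of the eval loss:

```python
import math

eval_loss = 3.502665          # eval/loss from the table above
print(math.exp(eval_loss))    # ~33.20, matching the reported eval perplexity
```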
{"language": "id", "widget": [{"text": "Wahai rembulan yang tertutup awan hujan"}]}
ayameRushia/gpt2-small-indonesia-fine-tuning-poem
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "id", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #tensorboard #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Indonesian GPT-2 finetuned on Indonesian poems
==============================================

This is the Indonesian gpt2-small model fine-tuned on Indonesian poems. The dataset can be found here. All training was done on Google Colab Jupyter Notebook (soon).

The dataset is split into two subsets with the details below:

split: train, count (examples): 7,358, percentage: 80%
split: validation, count (examples): 1,890, percentage: 20%

### Evaluation results

The model evaluation results after 10 epochs are as follows:

The logs can be found in wandb page here or tensorboard here
[ "### Evaluation results\n\n\nThe model evaluation results after 10 epochs are as follows:\n\n\n\nThe logs can be found in wandb page here or tensorboard here" ]
[ "TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Evaluation results\n\n\nThe model evaluation results after 10 epochs are as follows:\n\n\n\nThe logs can be found in wandb page here or tensorboard here" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # indobert-base-uncased-finetuned-indonlu-smsa This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.2277 - Accuracy: 0.9302 - F1: 0.9066 - Precision: 0.8992 - Recall: 0.9147 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 344 | 0.3831 | 0.8476 | 0.7715 | 0.7817 | 0.7627 | | 0.4167 | 2.0 | 688 | 0.2809 | 0.8905 | 0.8406 | 0.8699 | 0.8185 | | 0.2624 | 3.0 | 1032 | 0.2254 | 0.9230 | 0.8842 | 0.9004 | 0.8714 | | 0.2624 | 4.0 | 1376 | 0.2378 | 0.9238 | 0.8797 | 0.9180 | 0.8594 | | 0.1865 | 5.0 | 1720 | 0.2277 | 0.9302 | 0.9066 | 0.8992 | 0.9147 | | 0.1217 | 6.0 | 2064 | 0.2444 | 0.9262 | 0.8981 | 0.9013 | 0.8957 | | 0.1217 | 7.0 | 2408 | 0.2985 | 0.9286 | 0.8999 | 0.9035 | 0.8971 | | 0.0847 | 8.0 | 2752 | 0.3397 | 0.9278 | 0.8969 | 0.9090 | 0.8871 | | 0.0551 | 9.0 | 3096 | 0.3542 | 0.9270 | 0.8961 | 0.9010 | 0.8924 | | 0.0551 | 10.0 | 3440 | 0.3862 | 0.9222 | 0.8895 | 0.8970 | 0.8846 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
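A minimal inference sketch, assuming the standard `transformers` Auto classes; the example sentence is this repository's widget text, and the per-class printout relies on the `id2label` mapping stored in the model config:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "ayameRushia/indobert-base-uncased-finetuned-indonlu-smsa"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

text = "Entah mengapa saya merasakan ada sesuatu yang janggal di produk ini"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)

# Print the probability assigned to each sentiment class
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 4))
```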
{"language": "id", "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["indonlu"], "metrics": ["accuracy", "f1", "precision", "recall"], "widget": [{"text": "Entah mengapa saya merasakan ada sesuatu yang janggal di produk ini"}], "model-index": [{"name": "indobert-base-uncased-finetuned-indonlu-smsa", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "indonlu", "type": "indonlu", "args": "smsa"}, "metrics": [{"type": "accuracy", "value": 0.9301587301587302, "name": "Accuracy"}, {"type": "f1", "value": 0.9066105299178986, "name": "F1"}, {"type": "precision", "value": 0.8992078788375845, "name": "Precision"}, {"type": "recall", "value": 0.9147307323234121, "name": "Recall"}]}]}]}
ayameRushia/indobert-base-uncased-finetuned-indonlu-smsa
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "generated_from_trainer", "id", "dataset:indonlu", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #generated_from_trainer #id #dataset-indonlu #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
indobert-base-uncased-finetuned-indonlu-smsa ============================================ This model is a fine-tuned version of indolem/indobert-base-uncased on the indonlu dataset. It achieves the following results on the evaluation set: * Loss: 0.2277 * Accuracy: 0.9302 * F1: 0.9066 * Precision: 0.8992 * Recall: 0.9147 ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1500 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.14.1 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #generated_from_trainer #id #dataset-indonlu #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-indonesian-1.5G-sentiment-analysis-smsa This model is a fine-tuned version of [cahya/roberta-base-indonesian-1.5G](https://huggingface.co/cahya/roberta-base-indonesian-1.5G) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.4294 - Accuracy: 0.9262 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6461 | 1.0 | 688 | 0.2620 | 0.9087 | | 0.2627 | 2.0 | 1376 | 0.2291 | 0.9151 | | 0.1784 | 3.0 | 2064 | 0.2891 | 0.9167 | | 0.1099 | 4.0 | 2752 | 0.3317 | 0.9230 | | 0.0857 | 5.0 | 3440 | 0.4294 | 0.9262 | | 0.0346 | 6.0 | 4128 | 0.4759 | 0.9246 | | 0.0221 | 7.0 | 4816 | 0.4946 | 0.9206 | | 0.006 | 8.0 | 5504 | 0.5823 | 0.9175 | | 0.0047 | 9.0 | 6192 | 0.5777 | 0.9159 | | 0.004 | 10.0 | 6880 | 0.5800 | 0.9175 | ### How to use this model in Transformers Library ```python from transformers import pipeline pipe = pipeline( "text-classification", model="ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa" ) pipe("Terima kasih atas bantuannya ya!") ``` ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"language": ["id"], "tags": ["generated_from_trainer"], "datasets": ["indonlp/indonlu"], "metrics": ["accuracy"], "widget": [{"text": "Entah mengapa saya merasakan ada sesuatu yang janggal di produk ini"}], "model-index": [{"name": "roberta-base-indonesian-1.5G-sentiment-analysis-smsa", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "indonlu", "type": "indonlu", "args": "smsa"}, "metrics": [{"type": "accuracy", "value": 0.9261904761904762, "name": "Accuracy"}]}]}]}
ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa
null
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "id", "dataset:indonlp/indonlu", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #roberta #text-classification #generated_from_trainer #id #dataset-indonlp/indonlu #model-index #autotrain_compatible #endpoints_compatible #region-us
roberta-base-indonesian-1.5G-sentiment-analysis-smsa ==================================================== This model is a fine-tuned version of cahya/roberta-base-indonesian-1.5G on the indonlu dataset. It achieves the following results on the evaluation set: * Loss: 0.4294 * Accuracy: 0.9262 ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1500 * num\_epochs: 10 ### Training results ### How to use this model in Transformers Library ### Framework versions * Transformers 4.14.1 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 10", "### Training results", "### How to use this model in Transformers Library", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #generated_from_trainer #id #dataset-indonlp/indonlu #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 10", "### Training results", "### How to use this model in Transformers Library", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-indonesian-sentiment-analysis-smsa This model is a fine-tuned version of [flax-community/indonesian-roberta-base](https://huggingface.co/flax-community/indonesian-roberta-base) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.4252 - Accuracy: 0.9349 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7582 | 1.0 | 688 | 0.3280 | 0.8786 | | 0.3225 | 2.0 | 1376 | 0.2398 | 0.9206 | | 0.2057 | 3.0 | 2064 | 0.2574 | 0.9230 | | 0.1642 | 4.0 | 2752 | 0.2820 | 0.9302 | | 0.1266 | 5.0 | 3440 | 0.3344 | 0.9317 | | 0.0608 | 6.0 | 4128 | 0.3543 | 0.9341 | | 0.058 | 7.0 | 4816 | 0.4252 | 0.9349 | | 0.0315 | 8.0 | 5504 | 0.4736 | 0.9310 | | 0.0166 | 9.0 | 6192 | 0.4649 | 0.9349 | | 0.0143 | 10.0 | 6880 | 0.4648 | 0.9341 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
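The warm-up plus linear-decay schedule listed above corresponds to the standard `transformers` linear scheduler; a rough sketch of how it would be wired up (the base checkpoint name comes from the card, while `num_labels=3` for the three SMSA sentiment classes and the optimizer wiring are assumptions):

```python
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup

model = AutoModelForSequenceClassification.from_pretrained(
    "flax-community/indonesian-roberta-base", num_labels=3
)
optimizer = AdamW(model.parameters(), lr=1e-5, betas=(0.9, 0.999), eps=1e-8)

# 6880 total optimisation steps = 10 epochs x 688 steps per epoch (see the results table)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=2000, num_training_steps=6880
)
```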
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["indonlu"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-indonesian-sentiment-analysis-smsa", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "indonlu", "type": "indonlu", "args": "smsa"}, "metrics": [{"type": "accuracy", "value": 0.9349206349206349, "name": "Accuracy"}]}]}]}
ayameRushia/roberta-base-indonesian-sentiment-analysis-smsa
null
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "dataset:indonlu", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #roberta #text-classification #generated_from_trainer #dataset-indonlu #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
roberta-base-indonesian-sentiment-analysis-smsa =============================================== This model is a fine-tuned version of flax-community/indonesian-roberta-base on the indonlu dataset. It achieves the following results on the evaluation set: * Loss: 0.4252 * Accuracy: 0.9349 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.14.1 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #generated_from_trainer #dataset-indonlu #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
# wav2vec2-large-xls-r-300m-ar

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4819
- Wer: 0.4244

## Model description

An XLS-R 300M checkpoint fine-tuned for Arabic automatic speech recognition, as indicated by the model name.

## Intended uses & limitations

Intended for transcribing Arabic speech audio (XLS-R models expect 16 kHz input); no further limitations are documented by the author. A minimal inference sketch is given after this card.

## Training and evaluation data

Trained and evaluated on the common_voice dataset (Arabic subset, per the model name).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 11.0435       | 0.67  | 400   | 4.3104          | 1.0    |
| 3.4451        | 1.34  | 800   | 3.1566          | 1.0    |
| 3.1399        | 2.01  | 1200  | 3.0532          | 0.9990 |
| 2.8538        | 2.68  | 1600  | 1.6994          | 0.9238 |
| 1.7195        | 3.35  | 2000  | 0.8867          | 0.6727 |
| 1.326         | 4.02  | 2400  | 0.6603          | 0.5834 |
| 1.1561        | 4.69  | 2800  | 0.5809          | 0.5479 |
| 1.0764        | 5.36  | 3200  | 0.5943          | 0.5495 |
| 1.0144        | 6.03  | 3600  | 0.5344          | 0.5251 |
| 0.965         | 6.7   | 4000  | 0.4844          | 0.4936 |
| 0.927         | 7.37  | 4400  | 0.5048          | 0.5019 |
| 0.8985        | 8.04  | 4800  | 0.5809          | 0.5267 |
| 0.8684        | 8.71  | 5200  | 0.4740          | 0.4753 |
| 0.8581        | 9.38  | 5600  | 0.4813          | 0.4834 |
| 0.8334        | 10.05 | 6000  | 0.4515          | 0.4545 |
| 0.8134        | 10.72 | 6400  | 0.4370          | 0.4543 |
| 0.8002        | 11.39 | 6800  | 0.4225          | 0.4384 |
| 0.7884        | 12.06 | 7200  | 0.4593          | 0.4565 |
| 0.7675        | 12.73 | 7600  | 0.4752          | 0.4680 |
| 0.7607        | 13.4  | 8000  | 0.4950          | 0.4771 |
| 0.7475        | 14.07 | 8400  | 0.4373          | 0.4391 |
| 0.7397        | 14.74 | 8800  | 0.4506          | 0.4541 |
| 0.7289        | 15.41 | 9200  | 0.4840          | 0.4691 |
| 0.722         | 16.08 | 9600  | 0.4701          | 0.4571 |
| 0.7067        | 16.75 | 10000 | 0.4561          | 0.4461 |
| 0.7033        | 17.42 | 10400 | 0.4384          | 0.4347 |
| 0.6915        | 18.09 | 10800 | 0.4424          | 0.4290 |
| 0.6854        | 18.76 | 11200 | 0.4635          | 0.4360 |
| 0.6813        | 19.43 | 11600 | 0.4280          | 0.4147 |
| 0.6776        | 20.1  | 12000 | 0.4610          | 0.4344 |
| 0.67          | 20.77 | 12400 | 0.4540          | 0.4367 |
| 0.6653        | 21.44 | 12800 | 0.4509          | 0.4234 |
| 0.6609        | 22.11 | 13200 | 0.4874          | 0.4444 |
| 0.6541        | 22.78 | 13600 | 0.4542          | 0.4230 |
| 0.6528        | 23.45 | 14000 | 0.4732          | 0.4373 |
| 0.6463        | 24.12 | 14400 | 0.4483          | 0.4188 |
| 0.6399        | 24.79 | 14800 | 0.4731          | 0.4341 |
| 0.6353        | 25.46 | 15200 | 0.5031          | 0.4412 |
| 0.6358        | 26.13 | 15600 | 0.4986          | 0.4397 |
| 0.6317        | 26.8  | 16000 | 0.5000          | 0.4360 |
| 0.6262        | 27.47 | 16400 | 0.4958          | 0.4318 |
| 0.6317        | 28.14 | 16800 | 0.4738          | 0.4234 |
| 0.6205        | 28.81 | 17200 | 0.4853          | 0.4262 |
| 0.6205        | 29.48 | 17600 | 0.4819          | 0.4244 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
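A minimal, hedged inference sketch follows. It assumes the checkpoint is available on the Hub under the id listed in this record (`ayameRushia/wav2vec2-large-xls-r-300m-ar`) and that the input audio is 16 kHz mono; the file path is a placeholder, and librosa is just one convenient way to load and resample audio.

```python
# Minimal sketch, not part of the original card: greedy CTC decoding
# with the fine-tuned checkpoint. Input must be 16 kHz mono audio.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "ayameRushia/wav2vec2-large-xls-r-300m-ar"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "arabic_sample.wav" is a placeholder path, not a file shipped with the model.
speech, _ = librosa.load("arabic_sample.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```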
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ar", "results": []}]}
ayameRushia/wav2vec2-large-xls-r-300m-ar
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xls-r-300m-ar ============================ This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 0.4819 * Wer: 0.4244 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 400 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
# wav2vec2-large-xls-r-300m-el

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - EL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3218
- Wer: 0.3095

## Training and evaluation data

Evaluation was run in a notebook; see `notebook_evaluation_wav2vec2_el.ipynb` in this repository.

Test results without LM:
- WER = 31.1294 %
- CER = 7.9509 %

Test results using LM:
- WER = 20.7340 %
- CER = 6.0466 %

How to use `eval.py`:

```
huggingface-cli login # log in to Hugging Face to get the auth token needed to access Common Voice v8

# running with LM
!python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-el --dataset mozilla-foundation/common_voice_8_0 --config el --split test

# running without LM
!python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-el --dataset mozilla-foundation/common_voice_8_0 --config el --split test --greedy
```

A minimal transcription sketch is given after this card.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 80.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.3683        | 8.77  | 500  | 3.1280          | 1.0    |
| 1.9915        | 17.54 | 1000 | 0.6600          | 0.6444 |
| 0.6565        | 26.32 | 1500 | 0.4208          | 0.4486 |
| 0.4484        | 35.09 | 2000 | 0.3885          | 0.4006 |
| 0.3573        | 43.86 | 2500 | 0.3548          | 0.3626 |
| 0.3063        | 52.63 | 3000 | 0.3375          | 0.3430 |
| 0.2751        | 61.4  | 3500 | 0.3359          | 0.3241 |
| 0.2511        | 70.18 | 4000 | 0.3222          | 0.3108 |
| 0.2361        | 78.95 | 4500 | 0.3205          | 0.3084 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
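A minimal, hedged transcription sketch follows. It assumes the checkpoint is available on the Hub under the id listed in this record (`ayameRushia/wav2vec2-large-xls-r-300m-el`); the audio path is a placeholder. The high-level pipeline performs greedy CTC decoding unless the uploaded repository bundles an n-gram LM decoder, in which case reproducing the LM-boosted numbers above would also require `pyctcdecode` and `kenlm` to be installed.

```python
# Minimal sketch, not part of the original card: transcribe a Greek
# recording with the high-level ASR pipeline (greedy CTC decoding by
# default; LM-boosted decoding only if the repo ships a decoder).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ayameRushia/wav2vec2-large-xls-r-300m-el",
)

# "greek_sample.wav" is a placeholder path; when given a file path the
# pipeline decodes and resamples it (via ffmpeg) to the expected 16 kHz.
print(asr("greek_sample.wav")["text"])
```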
{"language": ["el"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-el", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "el"}, "metrics": [{"type": "wer", "value": 20.9, "name": "Test WER using LM"}, {"type": "cer", "value": 6.0466, "name": "Test CER using LM"}]}]}]}
ayameRushia/wav2vec2-large-xls-r-300m-el
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "el", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "el" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #el #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - EL dataset. It achieves the following results on the evaluation set: * Loss: 0.3218 * Wer: 0.3095 Training and evaluation data ---------------------------- Evaluation is conducted in Notebook, you can see within the repo "notebook\_evaluation\_wav2vec2\_el.ipynb" Test WER without LM wer = 31.1294 % cer = 7.9509 % Test WER using LM wer = 20.7340 % cer = 6.0466 % How to use URL Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 400 * num\_epochs: 80.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 80.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #el #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 80.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]