| Column | Dtype | Range |
|:--------------|:-----------------------|:--------------------------------------------|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-23 12:32:37 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 571 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-23 12:31:07 |
| card | string | length 11 to 1.01M |
sagar/pretrained-FinBERT
sagar
2021-01-04T04:34:18Z
2
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
FinBERT: a pretrained model to be used with downstream tasks.
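The card itself gives no loading code; a minimal sketch along the lines below should work, assuming the repository ships both the PyTorch weights and a compatible tokenizer (the card does not confirm this):

```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical usage sketch: load the pretrained FinBERT backbone and encode a sentence.
tokenizer = AutoTokenizer.from_pretrained("sagar/pretrained-FinBERT")
model = AutoModel.from_pretrained("sagar/pretrained-FinBERT")

inputs = tokenizer("Quarterly revenue beat analyst expectations.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextual embeddings for downstream tasks
```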
thilina/mt5-sinhalese-english
thilina
2021-01-03T21:14:26Z
65
8
transformers
[ "transformers", "pytorch", "tf", "mt5", "text2text-generation", "translation", "si", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - si - en tags: - translation license: apache-2.0 metrics: - sacrebleu --- # mt5-sinhalese-english ## Model description An mT5-base model fine-tuned on the Sinhalese-English dataset in the Tatoeba Challenge. Can be used to translate from Sinhalese to English and vice versa. ## Training details - English - Sinhala dataset from the Tatoeba Challenge [Datasets](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/Data.md) - [mT5-base pre-trained weights](https://huggingface.co/google/mt5-base) ## Eval results SacreBLEU score: - English to Sinhalese: 10.3 - Sinhalese to English: 24.4
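The card reports evaluation scores but no inference snippet; a minimal sketch is shown below, assuming the checkpoint loads as a standard seq2seq model via `AutoModelForSeq2SeqLM` and that raw source-language text (without a task prefix) is acceptable, since the card does not document the exact input format used during fine-tuning:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch: translate an English sentence to Sinhalese with the fine-tuned mT5 model.
# Direction handling is undocumented; this sketch simply feeds raw source text.
tokenizer = AutoTokenizer.from_pretrained("thilina/mt5-sinhalese-english")
model = AutoModelForSeq2SeqLM.from_pretrained("thilina/mt5-sinhalese-english")

inputs = tokenizer("How are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```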
julien-c/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train
julien-c
2020-12-27T18:47:01Z
14
2
espnet
[ "espnet", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: en datasets: - ljspeech license: cc-by-4.0 widget: - text: "Hello, how are you doing?" --- ## Example ESPnet2 TTS model ### `kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best` ♻️ Imported from https://zenodo.org/record/3989498#.X90RlOlKjkM This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Training config See full config in [`config.yaml`](./config.yaml) ```yaml config: conf/tuning/train_tacotron2.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/tts_train_tacotron2_raw ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true ```
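The card's ESPnet2 demo block is still marked "coming soon"; the sketch below shows one plausible way to run inference, assuming a recent ESPnet2 release that provides `Text2Speech.from_pretrained` and can resolve this Hub repository (neither is confirmed by the card):

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Sketch (assumptions noted above): synthesize speech with the imported Tacotron2 LJSpeech model.
tts = Text2Speech.from_pretrained(
    "julien-c/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train"
)
result = tts("Hello, how are you doing?")
sf.write("out.wav", result["wav"].numpy(), tts.fs)
```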
monologg/koelectra-small-discriminator
monologg
2020-12-26T16:23:23Z
168
0
transformers
[ "transformers", "pytorch", "electra", "pretraining", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: ko --- # KoELECTRA (Small Discriminator) Pretrained ELECTRA Language Model for Korean (`koelectra-small-discriminator`) For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md). ## Usage ### Load model and tokenizer ```python >>> from transformers import ElectraModel, ElectraTokenizer >>> model = ElectraModel.from_pretrained("monologg/koelectra-small-discriminator") >>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator") ``` ### Tokenizer example ```python >>> from transformers import ElectraTokenizer >>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator") >>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]") ['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'] >>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']) [2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3] ``` ## Example using ElectraForPreTraining ```python import torch from transformers import ElectraForPreTraining, ElectraTokenizer discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-small-discriminator") tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-small-discriminator") sentence = "나는 방금 밥을 먹었다." fake_sentence = "나는 내일 밥을 먹었다." fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) print(list(zip(fake_tokens, predictions.tolist()[1:-1]))) ```
m3hrdadfi/albert-fa-base-v2-sentiment-snappfood
m3hrdadfi
2020-12-26T08:49:28Z
7
0
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types. ### SnappFood [Snappfood](https://snappfood.ir/) (an online food delivery company) user comments containing 70,000 comments with two labels (i.e. polarity classification): 1. Happy 2. Sad | Label | # | |:--------:|:-----:| | Negative | 35000 | | Positive | 35000 | **Download** You can download the dataset from [here](https://drive.google.com/uc?id=15J4zPN1BD7Q_ZIQ39VeFquwSoW8qTxgu) ## Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:| | SnappFood User Comments | 85.79 | 88.12 | 87.87 | - | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
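The card describes the task and results but includes no inference code; a minimal sketch using the generic text-classification pipeline follows (requires `sentencepiece`; the exact label names returned depend on the model's config and are not documented in the card):

```python
from transformers import pipeline

# Sketch: binary (Happy/Sad) sentiment classification for Persian food-delivery comments.
classifier = pipeline(
    "text-classification",
    model="m3hrdadfi/albert-fa-base-v2-sentiment-snappfood",
)
print(classifier("غذا خیلی دیر رسید و سرد بود."))  # e.g. [{'label': ..., 'score': ...}]
```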
m3hrdadfi/albert-fa-base-v2-sentiment-digikala
m3hrdadfi
2020-12-26T08:48:33Z
5
0
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types. ### Digikala Digikala user comments provided by [Open Data Mining Program (ODMP)](https://www.digikala.com/opendata/). This dataset contains 62,321 user comments with three labels: | Label | # | |:---------------:|:------:| | no_idea | 10394 | | not_recommended | 15885 | | recommended | 36042 | **Download** You can download the dataset from [here](https://www.digikala.com/opendata/) ## Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:| | Digikala User Comments | 81.12 | 81.74 | 80.74 | - | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-sentiment-multi
m3hrdadfi
2020-12-26T08:46:20Z
8
1
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types. ## Results The model obtained an F1 score of 70.72% for a composition of all three datasets into a multi-labels `Negative`, `Neutral` and `Positive`. ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi
m3hrdadfi
2020-12-26T08:42:15Z
44
0
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types. ### DeepSentiPers which is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset. **Binary:** 1. Negative (Furious + Angry) 2. Positive (Happy + Delighted) **Multi** 1. Furious 2. Angry 3. Neutral 4. Happy 5. Delighted | Label | # | |:---------:|:----:| | Furious | 236 | | Angry | 1357 | | Neutral | 2874 | | Happy | 2848 | | Delighted | 2516 | **Download** You can download the dataset from: - [SentiPers](https://github.com/phosseini/sentipers) - [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers) ## Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:| | SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 | | SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary
m3hrdadfi
2020-12-26T08:42:08Z
7
0
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types. ### DeepSentiPers which is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset. **Binary:** 1. Negative (Furious + Angry) 2. Positive (Happy + Delighted) **Multi** 1. Furious 2. Angry 3. Neutral 4. Happy 5. Delighted | Label | # | |:---------:|:----:| | Furious | 236 | | Angry | 1357 | | Neutral | 2874 | | Happy | 2848 | | Delighted | 2516 | **Download** You can download the dataset from: - [SentiPers](https://github.com/phosseini/sentipers) - [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers) ## Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:| | SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 | | SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-clf-persiannews
m3hrdadfi
2020-12-26T08:36:46Z
27
3
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Text Classification [DigiMag, Persian News] The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`. ### Persian News A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes. 1. Economic 2. International 3. Political 4. Science Technology 5. Cultural Art 6. Sport 7. Medical | Label | # | |:------------------:|:----:| | Social | 2170 | | Economic | 1564 | | International | 1975 | | Political | 2269 | | Science Technology | 2436 | | Cultural Art | 2558 | | Sport | 1381 | | Medical | 2085 | **Download** You can download the dataset from [here](https://drive.google.com/uc?id=1B6xotfXCcW9xS1mYSBQos7OCg0ratzKC) ## Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | |:-----------------:|:-----------------:|:-----------:|:-----:| | Persian News | 97.01 | 97.19 | 95.79 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
m3hrdadfi/albert-fa-base-v2-ner-peyma
m3hrdadfi
2020-12-26T08:36:20Z
4
1
transformers
[ "transformers", "pytorch", "tf", "albert", "token-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian NER [ARMAN, PEYMA] This task aims to extract named entities in the text, such as names and label with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”` the `”B”`tag corresponds to the first word of an object, and the `”I”` tag corresponds to the rest of the terms of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN`, and `PEYMA`. ### PEYMA PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes. 1. Organization 2. Money 3. Location 4. Date 5. Time 6. Person 7. Percent | Label | # | |:------------:|:-----:| | Organization | 16964 | | Money | 2037 | | Location | 8782 | | Date | 4259 | | Time | 732 | | Person | 7675 | | Percent | 699 | **Download** You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/) ## Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:| | PEYMA | 88.99 | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
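The card documents the tag set but not how to run inference; a minimal sketch with the token-classification pipeline is shown below, assuming the checkpoint's config carries the IOB label mapping (entity grouping depends on the `aggregation_strategy` chosen):

```python
from transformers import pipeline

# Sketch: Persian NER with the PEYMA-fine-tuned ALBERT model, grouping word pieces into entities.
ner = pipeline(
    "token-classification",
    model="m3hrdadfi/albert-fa-base-v2-ner-peyma",
    aggregation_strategy="simple",
)
print(ner("حسن روحانی در تهران سخنرانی کرد."))
```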
m3hrdadfi/albert-fa-base-v2-clf-digimag
m3hrdadfi
2020-12-26T08:28:59Z
4
0
transformers
[ "transformers", "pytorch", "tf", "albert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian Text Classification [DigiMag, Persian News] The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`. ### DigiMag A total of 8,515 articles scraped from [Digikala Online Magazine](https://www.digikala.com/mag/). This dataset includes seven different classes. 1. Video Games 2. Shopping Guide 3. Health Beauty 4. Science Technology 5. General 6. Art Cinema 7. Books Literature | Label | # | |:------------------:|:----:| | Video Games | 1967 | | Shopping Guide | 125 | | Health Beauty | 1610 | | Science Technology | 2772 | | General | 120 | | Art Cinema | 1667 | | Books Literature | 254 | **Download** You can download the dataset from [here](https://drive.google.com/uc?id=1YgrCYY-Z0h2z0-PfWVfOGt1Tv0JDI-qz) ## Results The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | |:-----------------:|:-----------------:|:-----------:|:-----:| | Digikala Magazine | 92.33 | 93.59 | 90.72 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
lordtt13/t5-inshorts
lordtt13
2020-12-25T23:05:41Z
5
0
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "en", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en inference: false --- ## T5-inshorts: T5 model trained on inshorts data ### Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* and here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code. ### Details of the downstream task (Summarization) - Dataset 📚 - The summarization data has been taken from [Inshorts News Data](https://www.kaggle.com/shashichander009/inshorts-news-data) from kaggle. Inshorts is a news service that provides short summaries of news from around the web. This dataset contains headlines and summary of news items along with its source. ### Model training The training script is present [here](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-t5-for-summarization.ipynb). ### Pipelining the Model ```python import transformers model = transformers.T5ForConditionalGeneration.from_pretrained('lordtt13/t5-inshorts') tokenizer = transformers.T5Tokenizer.from_pretrained("lordtt13/t5-inshorts") nlp_fill = transformers.pipeline('summarization', model = model, tokenizer = tokenizer) nlp_fill('The CBI on Saturday booked four former officials of Syndicate Bank and six others for cheating, forgery, criminal conspiracy and causing ₹209 crore loss to the state-run bank. The accused had availed home loans and credit from Syndicate Bank on the basis of forged and fabricated documents. These funds were fraudulently transferred to the companies owned by the accused persons.', min_length=5, max_length=40) # Output: # [{'summary_text': ': CBI books 4 ex-bank officials for cheating, forgery'}] ``` > Created by [Tanmay Thakur](https://github.com/lordtt13) | [LinkedIn](https://www.linkedin.com/in/tanmay-thakur-6bb5a9154/) > PS: Still looking for more resources to expand my expansion!
julien-c/voice-activity-detection
julien-c
2020-12-21T22:38:05Z
4
16
null
[ "pytorch", "pyannote", "audio", "voice-activity-detection", "dataset:dihard", "arxiv:1910.10655", "license:mit", "region:us" ]
voice-activity-detection
2022-03-02T23:29:05Z
--- tags: - pyannote - audio - voice-activity-detection datasets: - dihard license: mit inference: false --- ## Example pyannote-audio Voice Activity Detection model ### `pyannote.audio.models.segmentation.PyanNet` ♻️ Imported from https://github.com/pyannote/pyannote-audio-hub This model was trained by @hbredin. ### Demo: How to use in pyannote-audio ```python from pyannote.audio.core.inference import Inference model = Inference('julien-c/voice-activity-detection', device='cuda') model({ "audio": "TheBigBangTheory.wav" }) ``` ### Citing pyannote-audio ```BibTex @inproceedings{Bredin2020, Title = {{pyannote.audio: neural building blocks for speaker diarization}}, Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe}, Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing}, Address = {Barcelona, Spain}, Month = {May}, Year = {2020}, } ``` or ```bibtex @inproceedings{Lavechin2020, author = {Marvin Lavechin and Marie-Philippe Gill and Ruben Bousbib and Herv\'{e} Bredin and Leibny Paola Garcia-Perera}, title = {{End-to-end Domain-Adversarial Voice Activity Detection}}, year = {2020}, url = {https://arxiv.org/abs/1910.10655}, } ```
panggi/t5-small-indonesian-summarization-cased
panggi
2020-12-19T18:01:23Z
149
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "pipeline:summarization", "summarization", "id", "dataset:indosum", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: id tags: - pipeline:summarization - summarization - t5 datasets: - indosum --- # Indonesian T5 Summarization Small Model Finetuned T5 small summarization model for Indonesian. ## Finetuning Corpus `t5-small-indonesian-summarization-cased` model is based on `t5-small-bahasa-summarization-cased` by [huseinzol05](https://huggingface.co/huseinzol05), finetuned using [indosum](https://github.com/kata-ai/indosum) dataset. ## Load Finetuned Model ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("panggi/t5-small-indonesian-summarization-cased") model = T5ForConditionalGeneration.from_pretrained("panggi/t5-small-indonesian-summarization-cased") ``` ## Code Sample ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("panggi/t5-small-indonesian-summarization-cased") model = T5ForConditionalGeneration.from_pretrained("panggi/t5-small-indonesian-summarization-cased") # https://www.sehatq.com/artikel/apa-itu-dispepsia-fungsional-ketahui-gejala-dan-faktor-risikonya ARTICLE_TO_SUMMARIZE = "Secara umum, dispepsia adalah kumpulan gejala pada saluran pencernaan seperti nyeri, sensasi terbakar, dan rasa tidak nyaman pada perut bagian atas. Pada beberapa kasus, dispepsia yang dialami seseorang tidak dapat diketahui penyebabnya. Jenis dispepsia ini disebut dengan dispepsia fungsional. Apa saja gejala dispepsia fungsional? Apa itu dispepsia fungsional? Dispepsia fungsional adalah kumpulan gejala tanpa sebab pada saluran pencernaan bagian atas. Gejala tersebut dapat berupa rasa sakit, nyeri, dan tak nyaman pada perut bagian atas atau ulu hati. Penderita dispepsia fungsional juga akan merasakan kenyang lebih cepat dan sensasi perut penuh berkepanjangan. Gejala-gejala tersebut bisa berlangsung selama sebulan atau lebih. Dispepsia ini memiliki nama “fungsional” karena kumpulan gejalanya tidak memiliki penyebab yang jelas. Dilihat dari fungsi dan struktur saluran pencernaan, dokter tidak menemukan hal yang salah. Namun, gejalanya bisa sangat mengganggu dan menyiksa. Dispepsia fungsional disebut juga dengan dispepsia nonulkus. Diperkirakan bahwa 20% masyarakat dunia menderita dispepsia fungsional. Kondisi ini berisiko tinggi dialami oleh wanita, perokok, dan orang yang mengonsumsi obat anti-peradangan nonsteroid (NSAID). Dispepsia fungsional bisa bersifat kronis dan mengganggu kehidupan penderitanya. Namun beruntung, ada beberapa strategi yang bisa diterapkan untuk mengendalikan gejala dispepsia ini. Strategi tersebut termasuk perubahan gaya hidup, obat-obatan, dan terapi.Ragam gejala dispepsia fungsional Gejala dispepsia fungsional dapat bervariasi antara satu pasien dengan pasien lain. Beberapa tanda yang bisa dirasakan seseorang, yaitu: Sensasi terbakar atau nyeri di saluran pencernaan bagian atas Perut kembung Cepat merasa kenyang walau baru makan sedikit Mual Muntah Bersendawa Rasa asam di mulut Penurunan berat badan Tekanan psikologis terkait dengan kondisi yang dialami Apa sebenarnya penyebab dispepsia fungsional? Sebagai penyakit fungsional, dokter mengkategorikan dispepsia ini sebagai penyakit yang tidak diketahui penyebabnya. Hanya saja, beberapa faktor bisa meningkatkan risiko seseorang terkena dispepsia fungsional. 
Faktor risiko tersebut, termasuk: Alergi terhadap zat tertentu Perubahan mikrobioma usus Infeksi, seperti yang dipicu oleh bakteriHelicobacter pylori Sekresi asam lambung yang tidak normal Peradangan pada saluran pencernaan bagian atas Gangguan pada fungsi lambung untuk mencerna makanan Pola makan tertentu Gaya hidup tidak sehat Stres Kecemasan atau depresi Efek samping pemakaian obat seperti obat antiinflamasi nonsteroid Penanganan untuk dispepsia fungsional Ada banyak pilihan pengobatan untuk dispepsia fungsional. Seperti yang disampaikan di atas, tidak ada penyebab tunggal dispepsia ini yang bisa diketahui. Gejala yang dialami antara satu pasien juga mungkin amat berbeda dari orang lain. Dengan demikian, jenis pengobatan dispepsia fungsional juga akan bervariasi. Beberapa pilihan strategi penanganan untuk dispepsia fungsional, meliputi: 1. Obat-obatan Ada beberapa jenis obat yang mungkin akan diberikan dokter, seperti Obat penetral asam lambung yang disebut penghambat reseptor H2 Obat penghambat produksi asam lambung yang disebut proton pump inhibitors Obat untuk mengendalikan gas di perut yang mengandung simetikon Antidepresan seperti amitriptyline Obat penguat kerongkongan yang disebut agen prokinetik Obat untuk pengosongan isi lambung seperti metoclopramide Antibiotik jika dokter mendeteksi adanya infeksi bakteri H. pylori 2. Anjuran terkait perubahan gaya hidup Selain obat-obatan, dokter akan memberikan rekomendasi perubahan gaya hidup yang harus diterapkan pasien. Tips terkait perubahan gaya hidup termasuk: Makan lebih sering namun dengan porsi yang lebih sedikit Menjauhi makanan berlemak karena memperlambat pengosongan makanan di lambung Menjauhi jenis makanan lain yang memicu gejala dispepsia, seperti makanan pedas, makanan tinggi asam, produk susu, dan produk kafein Menjauhi rokok Dokter juga akan meminta pasien untuk mencari cara untuk mengendalikan stres, tidur dengan kepala lebih tinggi, dan menjalankan usaha untuk mengendalikan berat badan. Apakah penyakit dispepsia itu berbahaya? Dispepsia, termasuk dispepsia fungsional, dapat menjadi kronis dengan gejala yang menyiksa. Jika tidak ditangani, dispepsia tentu dapat berbahaya dan mengganggu kehidupan pasien. Segera hubungi dokter apabila Anda merasakan gejala dispepsia, terlebih jika tidak merespons obat-obatan yang dijual bebas. Catatan dari SehatQ Dispepsia fungsional adalah kumpulan gejala pada saluran pencernaan bagian atas yang tidak diketahui penyebabnya. Dispepsia fungsional dapat ditangani dengan kombinasi obat-obatan dan perubahan gaya hidup. Jika masih memiliki pertanyaan terkait dispepsia fungsional, Anda bisa menanyakan ke dokter di aplikasi kesehatan keluarga SehatQ. Aplikasi SehatQ bisa diunduh gratis di Appstore dan Playstore yang berikan informasi penyakit terpercaya." # generate summary input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt') summary_ids = model.generate(input_ids, max_length=100, num_beams=2, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True, no_repeat_ngram_size=2, use_cache=True) summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(summary_text) ``` Output: ``` 'Dispepsia fungsional adalah kumpulan gejala tanpa sebab pada saluran pencernaan bagian atas. Gejala tersebut dapat berupa rasa sakit, nyeri, dan tak nyaman pada perut bagian atas. Penderita dispepsia fungsional juga akan merasakan kenyang lebih cepat dan sensasi perut penuh berkepanjangan. Gejala-gejala tersebut bisa berlangsung selama sebulan atau lebih. 
``` ## Acknowledgement Thanks to Immanuel Drexel for his article [Text Summarization, Extractive, T5, Bahasa Indonesia, Huggingface’s Transformers](https://medium.com/analytics-vidhya/text-summarization-t5-bahasa-indonesia-huggingfaces-transformers-ee9bfe368e2f)
laboro-ai/distilbert-base-japanese-finetuned-ddqa
laboro-ai
2020-12-18T03:10:13Z
5
1
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "ja", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: ja tags: - distilbert license: cc-by-nc-4.0 ---
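This card contains only front-matter metadata; based on the `question-answering` pipeline tag, a usage sketch might look like the following, assuming `AutoTokenizer` can resolve the bundled Japanese tokenizer (extra Japanese-tokenization dependencies may be required, which the card does not state):

```python
from transformers import pipeline

# Sketch: extractive QA in Japanese with the DDQA-fine-tuned DistilBERT model.
qa = pipeline(
    "question-answering",
    model="laboro-ai/distilbert-base-japanese-finetuned-ddqa",
)
result = qa(
    question="ラボロとは何ですか？",
    context="ラボロは東京を拠点とする人工知能の研究開発企業です。",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```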
laboro-ai/distilbert-base-japanese-finetuned-livedoor
laboro-ai
2020-12-18T03:09:54Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "ja", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: ja tags: - distilbert license: cc-by-nc-4.0 ---
sibt-rj/albert-large-urdu
sibt-rj
2020-12-16T20:27:42Z
6
0
transformers
[ "transformers", "pytorch", "albert", "fill-mask", "urdu", "language-model", "ur", "dataset:urdu-text-news", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - ur tags: - urdu - language-model license: mit datasets: - urdu-text-news ---
jnz/electra-ka
jnz
2020-12-12T21:53:36Z
46
2
transformers
[ "transformers", "pytorch", "electra", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
### electra-ka is the first Transformer-based, open-source Georgian language model. The model was trained on 33 GB of Georgian text collected from 4,854,621 pages in the Common Crawl archive.
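The card gives no loading example; since the checkpoint is published without a task head (its pipeline tag is empty), a minimal feature-extraction sketch is shown below, assuming a tokenizer is bundled with the repository:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Sketch: encode a Georgian sentence with electra-ka and inspect the contextual embeddings.
tokenizer = AutoTokenizer.from_pretrained("jnz/electra-ka")
model = AutoModel.from_pretrained("jnz/electra-ka")

inputs = tokenizer("საქართველო ლამაზი ქვეყანაა.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```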
ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base
ashwani-tanwar
2020-12-12T02:22:48Z
4
0
transformers
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "gu", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: gu --- # Gujarati-in-Devanagari-XLM-R-Base This model is finetuned over [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) (XLM-R) using its base variant with the Gujarati language using the [OSCAR](https://oscar-corpus.com/) monolingual dataset. We converted the Gujarati script to the Devanagari using [Indic-NLP](https://github.com/anoopkunchukuttan/indic_nlp_library) library. For example, the sentence 'અમદાવાદ એ ગુજરાતનું એક શહેર છે.' was converted to 'अमदावाद ए गुजरातनुं एक शहेर छे.'. This helped to get better contextualised representations for some words as the XLM-R was pre-trained with several languages written in Devanagari script such as Hindi, Marathi, Sanskrit, and so on. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model. ## Dataset OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/) who reported better performance with this diverse dataset as compared to the other large homogenous datasets. ## Preprocessing and Training Procedure Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure. ## Usage - This model can be used for further finetuning for different NLP tasks using the Gujarati language. - It can be used to generate contextualised word representations for the Gujarati words. - It can be used for domain adaptation. - It can be used to predict the missing words from the Gujarati sentences. ## Demo ### Using the model to predict missing words ``` from transformers import pipeline unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base') pred_word = unmasker("अमदावाद ए गुजरातनुं एक <mask> छे.") print(pred_word) ``` ``` [{'sequence': '<s> अमदावाद ए गुजरातनुं एक नगर छे.</s>', 'score': 0.24843722581863403, 'token': 18576, 'token_str': '▁नगर'}, {'sequence': '<s> अमदावाद ए गुजरातनुं एक महानगर छे.</s>', 'score': 0.21455222368240356, 'token': 122519, 'token_str': '▁महानगर'}, {'sequence': '<s> अमदावाद ए गुजरातनुं एक राज्य छे.</s>', 'score': 0.16832049190998077, 'token': 10665, 'token_str': '▁राज्य'}, {'sequence': '<s> अमदावाद ए गुजरातनुं एक जिल्ला छे.</s>', 'score': 0.06764694303274155, 'token': 20396, 'token_str': '▁जिल्ला'}, {'sequence': '<s> अमदावाद ए गुजरातनुं एक शहर छे.</s>', 'score': 0.05364946648478508, 'token': 22770, 'token_str': '▁शहर'}] ``` ### Using the model to generate contextualised word representations ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base") model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-in-Devanagari-XLM-R-Base") sentence = "अमदावाद ए गुजरातनुं एक शहेर छे." encoded_sentence = tokenizer(sentence, return_tensors='pt') context_word_rep = model(**encoded_sentence) ```
ashwani-tanwar/Gujarati-XLM-R-Large
ashwani-tanwar
2020-12-12T01:39:10Z
12
0
transformers
[ "transformers", "pytorch", "tf", "xlm-roberta", "fill-mask", "gu", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: gu --- # Gujarati-XLM-R-Large This model is finetuned over [XLM-RoBERTa](https://huggingface.co/xlm-roberta-large) (XLM-R) using its large variant with the Gujarati language using the [OSCAR](https://oscar-corpus.com/) monolingual dataset. We used the same masked language modelling (MLM) objective which was used for pretraining the XLM-R. As it is built over the pretrained XLM-R, we leveraged *Transfer Learning* by exploiting the knowledge from its parent model. ## Dataset OSCAR corpus contains several diverse datasets for different languages. We followed the work of [CamemBERT](https://www.aclweb.org/anthology/2020.acl-main.645/) who reported better performance with this diverse dataset as compared to the other large homogenous datasets. ## Preprocessing and Training Procedure Please visit [this link](https://github.com/ashwanitanwar/nmt-transfer-learning-xlm-r#6-finetuning-xlm-r) for the detailed procedure. ## Usage - This model can be used for further finetuning for different NLP tasks using the Gujarati language. - It can be used to generate contextualised word representations for the Gujarati words. - It can be used for domain adaptation. - It can be used to predict the missing words from the Gujarati sentences. ## Demo ### Using the model to predict missing words ``` from transformers import pipeline unmasker = pipeline('fill-mask', model='ashwani-tanwar/Gujarati-XLM-R-Large') pred_word = unmasker("અમદાવાદ એ ગુજરાતનું એક <mask> છે.") print(pred_word) ``` ``` [{'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક શહેર છે.</s>', 'score': 0.9790881276130676, 'token': 85227, 'token_str': '▁શહેર'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક રાજ્ય છે.</s>', 'score': 0.004246668424457312, 'token': 63678, 'token_str': '▁રાજ્ય'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક ગામ છે.</s>', 'score': 0.0038021174259483814, 'token': 66346, 'token_str': '▁ગામ'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક મહત્વ છે.</s>', 'score': 0.002798238070681691, 'token': 126763, 'token_str': '▁મહત્વ'}, {'sequence': '<s> અમદાવાદ એ ગુજરાતનું એક અમદાવાદ છે.</s>', 'score': 0.0021192911081016064, 'token': 69499, 'token_str': '▁અમદાવાદ'}] ``` ### Using the model to generate contextualised word representations ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Large") model = AutoModel.from_pretrained("ashwani-tanwar/Gujarati-XLM-R-Large") sentence = "અમદાવાદ એ ગુજરાતનું એક શહેર છે." encoded_sentence = tokenizer(sentence, return_tensors='pt') context_word_rep = model(**encoded_sentence) ```
flexudy/t5-base-multi-sentence-doctor
flexudy
2020-12-11T23:33:25Z
47
45
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
![avatar](sent-banner.png) # Sentence-Doctor Sentence doctor is a T5 model that attempts to correct the errors or mistakes found in sentences. Model works on English, German and French text. ## 1. Problem: Many NLP models depend on tasks like *Text Extraction Libraries, OCR, Speech to Text libraries* and **Sentence Boundary Detection** As a consequence errors caused by these tasks in your NLP pipeline can affect the quality of models in applications. Especially since models are often trained on **clean** input. ## 2. Solution: Here we provide a model that **attempts** to reconstruct sentences based on the its context (sourrounding text). The task is pretty straightforward: * `Given an "erroneous" sentence, and its context, reconstruct the "intended" sentence`. ## 3. Use Cases: * Attempt to repair noisy sentences that where extracted with OCR software or text extractors. * Attempt to repair sentence boundaries. * Example (in German): **Input: "und ich bin im**", * Prefix_Context: "Hallo! Mein Name ist John", Postfix_Context: "Januar 1990 geboren." * Output: "John und ich bin im Jahr 1990 geboren" * Possibly sentence level spelling correction -- Although this is not the intended use. * Input: "I went to church **las yesteday**" => Output: "I went to church last Sunday". ## 4. Disclaimer Note how we always emphises on the word *attempt*. The current version of the model was only trained on **150K** sentences from the tatoeba dataset: https://tatoeba.org/eng. (50K per language -- En, Fr, De). Hence, we strongly encourage you to finetune the model on your dataset. We might release a version trained on more data. ## 5. Datasets We generated synthetic data from the tatoeba dataset: https://tatoeba.org/eng. Randomly applying different transformations on words and characters based on some probabilities. The datasets are available in the data folder (where **sentence_doctor_dataset_300K** is a larger dataset with 100K sentences for each language). ## 6. Usage ### 6.1 Preprocessing * Let us assume we have the following text (Note that there are no punctuation marks in the text): ```python text = "That is my job I am a medical doctor I save lives" ``` * You decided extract the sentences and for some obscure reason, you obtained these sentences: ```python sentences = ["That is my job I a", "m a medical doct", "I save lives"] ``` * You now wish to correct the sentence **"m a medical doct"**. Here is the single preprocessing step for the model: ```python input_text = "repair_sentence: " + sentences[1] + " context: {" + sentences[0] + "}{" + sentences[2] + "} </s>" ``` **Explanation**:</br> * We are telling the model to repair the sentence with the prefix "repair_sentence: " * Then append the sentence we want to repair **sentence[1]** which is "m a medical doct" * Next we give some context to the model. In the case, the context is some text that occured before the sentence and some text that appeard after the sentence in the original text. * To do that, we append the keyword "context :" * Append **{sentence[0]}** "{That is my job I a}". (Note how it is sourrounded by curly braces). * Append **{sentence[2]}** "{I save lives}". * At last we tell the model this is the end of the input with </s>. 
```python print(input_text) # repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s> ``` <br/> **The context is optional**, so the input could also be ```repair_sentence: m a medical doct context: {}{} </s>``` ### 6.2 Inference ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("flexudy/t5-base-multi-sentence-doctor") model = AutoModelWithLMHead.from_pretrained("flexudy/t5-base-multi-sentence-doctor") input_text = "repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>" input_ids = tokenizer.encode(input_text, return_tensors="pt") outputs = model.generate(input_ids, max_length=32, num_beams=1) sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True) assert sentence == "I am a medical doctor." ``` ## 7. Fine-tuning We also provide a script `train_any_t5_task.py` that might help you fine-tune any Text2Text Task with T5. We added #TODO comments all over to help you use train with ease. For example: ```python # TODO Set your training epochs config.TRAIN_EPOCHS = 3 ``` If you don't want to read the #TODO comments, just pass in your data like this ```python # TODO Where is your data ? Enter the path trainer.start("data/sentence_doctor_dataset_300.csv") ``` and voila!! Please feel free to correct any mistakes in the code and make a pull request. ## 8. Attribution * [Huggingface](https://huggingface.co/) transformer lib for making this possible * Abhishek Kumar Mishra's transformer [tutorial](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) on text summarisation. Our training code is just a modified version of their code. So many thanks. * We finetuned this model from the huggingface hub: WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the [authors](https://huggingface.co/WikinewsSum) * We also read a lot of work from [Suraj Patil](https://github.com/patil-suraj) * No one has been forgotten, hopefully :)
zanelim/singbert-lite-sg
zanelim
2020-12-11T22:05:08Z
44
3
transformers
[ "transformers", "pytorch", "tf", "albert", "pretraining", "singapore", "sg", "singlish", "malaysia", "ms", "manglish", "albert-base-v2", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: - singapore - sg - singlish - malaysia - ms - manglish - albert-base-v2 license: mit datasets: - reddit singapore, malaysia - hardwarezone widget: - text: "dont play [MASK] leh" - text: "die [MASK] must try" --- # Model name SingBert Lite - Bert for Singlish (SG) and Manglish (MY). ## Model description Similar to [SingBert](https://huggingface.co/zanelim/singbert) but the lite-version, which was initialized from [Albert base v2](https://github.com/google-research/albert#albert), with pre-training finetuned on [singlish](https://en.wikipedia.org/wiki/Singlish) and [manglish](https://en.wikipedia.org/wiki/Manglish) data. ## Intended uses & limitations #### How to use ```python >>> from transformers import pipeline >>> nlp = pipeline('fill-mask', model='zanelim/singbert-lite-sg') >>> nlp("die [MASK] must try") [{'sequence': '[CLS] die die must try[SEP]', 'score': 0.7731555700302124, 'token': 1327, 'token_str': '▁die'}, {'sequence': '[CLS] die also must try[SEP]', 'score': 0.04763784259557724, 'token': 67, 'token_str': '▁also'}, {'sequence': '[CLS] die still must try[SEP]', 'score': 0.01859409362077713, 'token': 174, 'token_str': '▁still'}, {'sequence': '[CLS] die u must try[SEP]', 'score': 0.015824034810066223, 'token': 287, 'token_str': '▁u'}, {'sequence': '[CLS] die is must try[SEP]', 'score': 0.011271446943283081, 'token': 25, 'token_str': '▁is'}] >>> nlp("dont play [MASK] leh") [{'sequence': '[CLS] dont play play leh[SEP]', 'score': 0.4365769624710083, 'token': 418, 'token_str': '▁play'}, {'sequence': '[CLS] dont play punk leh[SEP]', 'score': 0.06880936771631241, 'token': 6769, 'token_str': '▁punk'}, {'sequence': '[CLS] dont play game leh[SEP]', 'score': 0.051739856600761414, 'token': 250, 'token_str': '▁game'}, {'sequence': '[CLS] dont play games leh[SEP]', 'score': 0.045703962445259094, 'token': 466, 'token_str': '▁games'}, {'sequence': '[CLS] dont play around leh[SEP]', 'score': 0.013458190485835075, 'token': 140, 'token_str': '▁around'}] >>> nlp("catch no [MASK]") [{'sequence': '[CLS] catch no ball[SEP]', 'score': 0.6197211146354675, 'token': 1592, 'token_str': '▁ball'}, {'sequence': '[CLS] catch no balls[SEP]', 'score': 0.08441998809576035, 'token': 7152, 'token_str': '▁balls'}, {'sequence': '[CLS] catch no joke[SEP]', 'score': 0.0676785409450531, 'token': 8186, 'token_str': '▁joke'}, {'sequence': '[CLS] catch no?[SEP]', 'score': 0.040638409554958344, 'token': 60, 'token_str': '?'}, {'sequence': '[CLS] catch no one[SEP]', 'score': 0.03546864539384842, 'token': 53, 'token_str': '▁one'}] >>> nlp("confirm plus [MASK]") [{'sequence': '[CLS] confirm plus chop[SEP]', 'score': 0.9608421921730042, 'token': 17144, 'token_str': '▁chop'}, {'sequence': '[CLS] confirm plus guarantee[SEP]', 'score': 0.011784233152866364, 'token': 9120, 'token_str': '▁guarantee'}, {'sequence': '[CLS] confirm plus confirm[SEP]', 'score': 0.010571340098977089, 'token': 10265, 'token_str': '▁confirm'}, {'sequence': '[CLS] confirm plus egg[SEP]', 'score': 0.0033525123726576567, 'token': 6387, 'token_str': '▁egg'}, {'sequence': '[CLS] confirm plus bet[SEP]', 'score': 0.0008760977652855217, 'token': 5676, 'token_str': '▁bet'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('zanelim/singbert-lite-sg') model = AlbertModel.from_pretrained("zanelim/singbert-lite-sg") text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained("zanelim/singbert-lite-sg") model = TFAlbertModel.from_pretrained("zanelim/singbert-lite-sg") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` #### Limitations and bias This model was finetuned on colloquial Singlish and Manglish corpus, hence it is best applied on downstream tasks involving the main constituent languages- english, mandarin, malay. Also, as the training data is mainly from forums, beware of existing inherent bias. ## Training data Colloquial singlish and manglish (both are a mixture of English, Mandarin, Tamil, Malay, and other local dialects like Hokkien, Cantonese or Teochew) corpus. The corpus is collected from subreddits- `r/singapore` and `r/malaysia`, and forums such as `hardwarezone`. ## Training procedure Initialized with [albert base v2](https://github.com/google-research/albert#albert) vocab and checkpoints (pre-trained weights). Pre-training was further finetuned on training data with the following hyperparameters * train_batch_size: 4096 * max_seq_length: 128 * num_train_steps: 125000 * num_warmup_steps: 5000 * learning_rate: 0.00176 * hardware: TPU v3-8
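Since the checkpoint is intended to be fine-tuned on downstream tasks, a minimal starting point for sequence classification might look like the sketch below. This is only an illustration: the label count, toy batch, and loss usage are placeholders, not an official recipe from the model author.

```python
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained("zanelim/singbert-lite-sg")
# num_labels is task-specific; 2 is just an example
model = AlbertForSequenceClassification.from_pretrained("zanelim/singbert-lite-sg", num_labels=2)

# Toy batch: replace with your own labelled Singlish/Manglish data
texts = ["dont play play leh", "catch no ball sia"]
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)

# loss to backpropagate during fine-tuning, logits of shape (batch_size, num_labels)
print(outputs.loss, outputs.logits.shape)
```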
yuvraj/xSumm
yuvraj
2020-12-11T22:05:01Z
10
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "extreme summarization", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: "en" tags: - summarization - extreme summarization --- ​ ## Model description ​ BartForConditionalGenerationModel for extreme summarization- creates a one line abstractive summary of a given article ​ ## How to use ​ PyTorch model available ​ ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline ​ tokenizer = AutoTokenizer.from_pretrained("yuvraj/xSumm") model = AutoModelWithLMHead.from_pretrained("yuvraj/xSumm") ​ xsumm = pipeline('summarization', model=model, tokenizer=tokenizer) xsumm("<text to be summarized>") ​ ## Limitations and bias Trained on a small fraction of the xsumm training dataset
yuvraj/summarizer-cnndm
yuvraj
2020-12-11T22:04:58Z
8
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: "en" tags: - summarization --- ​ # Summarization ​ ## Model description ​ BartForConditionalGeneration model fine tuned for summarization on 10000 samples from the cnn-dailymail dataset ​ ## How to use ​ PyTorch model available ​ ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline ​ tokenizer = AutoTokenizer.from_pretrained("yuvraj/summarizer-cnndm") AutoModelWithLMHead.from_pretrained("yuvraj/summarizer-cnndm") ​ summarizer = pipeline('summarization', model=model, tokenizer=tokenizer) summarizer("<Text to be summarized>") ​ ## Limitations and bias Trained on a small dataset
valhalla/t5-base-squad
valhalla
2020-12-11T22:03:51Z
13
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# T5 for question-answering

This is a T5-base model fine-tuned on SQuAD 1.1 for QA using a text-to-text approach.

## Model training

This model was trained on a Colab TPU with 35GB RAM for 4 epochs.

## Results:

| Metric | #Value |
|-------------|---------|
| Exact Match | 81.5610 |
| F1 | 89.9601 |

## Model in Action 🚀

```
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-squad")
model = AutoModelWithLMHead.from_pretrained("valhalla/t5-base-squad")

def get_answer(question, context):
  input_text = "question: %s context: %s </s>" % (question, context)
  features = tokenizer([input_text], return_tensors='pt')
  out = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'])
  return tokenizer.decode(out[0])

context = "In Norse mythology, Valhalla is a majestic, enormous hall located in Asgard, ruled over by the god Odin."
question = "What is Valhalla ?"

get_answer(question, context)
# output: 'a majestic, enormous hall located in Asgard, ruled over by the god Odin'
```

Play with this model [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1a5xpJiUjZybfU9Mi-aDkOp116PZ9-wni?usp=sharing)

> Created by Suraj Patil [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/patil-suraj/) [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/psuraj28)
valhalla/t5-base-qa-qg-hl
valhalla
2020-12-11T22:03:44Z
593
18
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-generation", "dataset:squad", "arxiv:1910.10683", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
license: mit
---

## T5 for multi-task QA and QG

This is a multi-task [t5-base](https://arxiv.org/abs/1910.10683) model trained for question answering and answer-aware question generation tasks.

For question generation the answer spans are highlighted within the text with special highlight tokens (`<hl>`) and prefixed with 'generate question: '. For QA the input is processed like this `question: question_text context: context_text </s>`

You can play with the model using the inference API. Here's how you can use it:

For QG

`generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`

For QA

`question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`

For more details see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline

nlp = pipeline("multitask-qa-qg", model="valhalla/t5-base-qa-qg-hl")

# to generate questions simply pass the text
nlp("42 is the answer to life, the universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]

# for qa pass a dict with "question" and "context"
nlp({
  "question": "What is 42 ?",
  "context": "42 is the answer to life, the universe and everything."
})
=> 'the answer to life, the universe and everything'
```
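If you would rather not clone the repo, the same prompt formats described above can be fed straight to the hub checkpoint with plain `transformers`. The sketch below only illustrates those formats (it assumes a recent `transformers` version with `AutoModelForSeq2SeqLM`; the generation settings are arbitrary) and is not the author's `pipelines` helper:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-qa-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-base-qa-qg-hl")

def run(prompt, max_length=64):
    # Encode the documented prompt format and decode the generated text
    inputs = tokenizer([prompt], return_tensors="pt")
    output_ids = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_length=max_length,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Question generation: the answer span is wrapped in <hl> tokens
print(run("generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"))

# Question answering: "question: ... context: ..." format
print(run("question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"))
```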
twmkn9/distilbert-base-uncased-squad2
twmkn9
2020-12-11T22:03:01Z
184
4
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
This model is [Distilbert base uncased](https://huggingface.co/distilbert-base-uncased) trained on SQuAD v2 as: ``` export SQUAD_DIR=../../squad2 python3 run_squad.py --model_type distilbert --model_name_or_path distilbert-base-uncased --do_train --do_eval --overwrite_cache --do_lower_case --version_2_with_negative --save_steps 100000 --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --per_gpu_train_batch_size 8 --num_train_epochs 3 --learning_rate 3e-5 --max_seq_length 384 --doc_stride 128 --output_dir ./tmp/distilbert_fine_tuned/ ``` Performance on a dev subset is close to the original paper: ``` Results: { 'exact': 64.88976637051661, 'f1': 68.1776176526635, 'total': 6078, 'HasAns_exact': 69.7594501718213, 'HasAns_f1': 76.62665295288285, 'HasAns_total': 2910, 'NoAns_exact': 60.416666666666664, 'NoAns_f1': 60.416666666666664, 'NoAns_total': 3168, 'best_exact': 64.88976637051661, 'best_exact_thresh': 0.0, 'best_f1': 68.17761765266337, 'best_f1_thresh': 0.0 } ``` We are hopeful this might save you time, energy, and compute. Cheers!
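For quick inference with the uploaded checkpoint, a sketch along the following lines should work (the question/context pair is purely illustrative and the exact scores will differ):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="twmkn9/distilbert-base-uncased-squad2")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This DistilBERT base uncased model was fine-tuned on SQuAD v2 for question answering.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'SQuAD v2'}
```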
tuner007/pegasus_qa
tuner007
2020-12-11T22:02:48Z
38
2
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# Pegasus for question-answering Pegasus model fine-tuned for QA using text-to-text approach ## Model in Action 🚀 ``` import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer model_name = 'tuner007/pegasus_qa' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) def get_answer(question, context): input_text = "question: %s text: %s" % (question,context) batch = tokenizer.prepare_seq2seq_batch([input_text], truncation=True, padding='longest', return_tensors="pt").to(torch_device) translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) return tgt_text[0] ``` #### Example: ``` context = "PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow." question = "How many customers were affected by the shutoffs?" get_answer(question, context) # output: '800 thousand' ``` > Created by Arpit Rajauria [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/arpit_rajauria)
tblard/tf-allocine
tblard
2020-12-11T22:02:40Z
1,820
10
transformers
[ "transformers", "tf", "camembert", "text-classification", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: fr --- # tf-allociné A french sentiment analysis model, based on [CamemBERT](https://camembert-model.fr/), and finetuned on a large-scale dataset scraped from [Allociné.fr](http://www.allocine.fr/) user reviews. ## Results | Validation Accuracy | Validation F1-Score | Test Accuracy | Test F1-Score | |--------------------:| -------------------:| -------------:|--------------:| | 97.39 | 97.36 | 97.44 | 97.34 | The dataset and the evaluation code are available on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert). ## Usage ```python from transformers import AutoTokenizer, TFAutoModelForSequenceClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("tblard/tf-allocine") model = TFAutoModelForSequenceClassification.from_pretrained("tblard/tf-allocine") nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) print(nlp("Alad'2 est clairement le meilleur film de l'année 2018.")) # POSITIVE print(nlp("Juste whoaaahouuu !")) # POSITIVE print(nlp("NUL...A...CHIER ! FIN DE TRANSMISSION.")) # NEGATIVE print(nlp("Je m'attendais à mieux de la part de Franck Dubosc !")) # NEGATIVE ``` ## Author Théophile Blard – :email: [email protected] If you use this work (code, model or dataset), please cite as: > Théophile Blard, French sentiment analysis with BERT, (2020), GitHub repository, <https://github.com/TheophileBlard/french-sentiment-analysis-with-bert>
squeezebert/squeezebert-uncased
squeezebert
2020-12-11T22:02:17Z
41,687
2
transformers
[ "transformers", "pytorch", "squeezebert", "arxiv:2006.11316", "arxiv:1904.00962", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
language: en license: bsd datasets: - bookcorpus - wikipedia --- # SqueezeBERT pretrained model This model, `squeezebert-uncased`, is a pretrained model for the English language using a masked language modeling (MLM) and Sentence Order Prediction (SOP) objective. SqueezeBERT was introduced in [this paper](https://arxiv.org/abs/2006.11316). This model is case-insensitive. The model architecture is similar to BERT-base, but with the pointwise fully-connected layers replaced with [grouped convolutions](https://blog.yani.io/filter-group-tutorial/). The authors found that SqueezeBERT is 4.3x faster than `bert-base-uncased` on a Google Pixel 3 smartphone. ## Pretraining ### Pretraining data - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of thousands of unpublished books - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) ### Pretraining procedure The model is pretrained using the Masked Language Model (MLM) and Sentence Order Prediction (SOP) tasks. (Author's note: If you decide to pretrain your own model, and you prefer to train with MLM only, that should work too.) From the SqueezeBERT paper: > We pretrain SqueezeBERT from scratch (without distillation) using the [LAMB](https://arxiv.org/abs/1904.00962) optimizer, and we employ the hyperparameters recommended by the LAMB authors: a global batch size of 8192, a learning rate of 2.5e-3, and a warmup proportion of 0.28. Following the LAMB paper's recommendations, we pretrain for 56k steps with a maximum sequence length of 128 and then for 6k steps with a maximum sequence length of 512. ## Finetuning The SqueezeBERT paper results from 2 approaches to finetuning the model: - "finetuning without bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on each GLUE task - "finetuning with bells and whistles" -- after pretraining the SqueezeBERT model, finetune it on a MNLI with distillation from a teacher model. Then, use the MNLI-finetuned SqueezeBERT model as a student model to finetune on each of the other GLUE tasks (e.g. RTE, MRPC, …) with distillation from a task-specific teacher model. A detailed discussion of the hyperparameters used for finetuning is provided in the appendix of the [SqueezeBERT paper](https://arxiv.org/abs/2006.11316). Note that finetuning SqueezeBERT with distillation is not yet implemented in this repo. If the author (Forrest Iandola - [email protected]) gets enough encouragement from the user community, he will add example code to Transformers for finetuning SqueezeBERT with distillation. This model, `squeezebert/squeezebert-uncased`, has been pretrained but not finetuned. For most text classification tasks, we recommend using squeezebert-mnli-headless as a starting point. ### How to finetune To try finetuning SqueezeBERT on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task, you can run the following command: ``` ./utils/download_glue_data.py python examples/text-classification/run_glue.py \ --model_name_or_path squeezebert-base-headless \ --task_name mrpc \ --data_dir ./glue_data/MRPC \ --output_dir ./models/squeezebert_mrpc \ --overwrite_output_dir \ --do_train \ --do_eval \ --num_train_epochs 10 \ --learning_rate 3e-05 \ --per_device_train_batch_size 16 \ --save_steps 20000 ``` ## BibTeX entry and citation info ``` @article{2020_SqueezeBERT, author = {Forrest N. Iandola and Albert E. Shaw and Ravi Krishna and Kurt W. 
Keutzer}, title = {{SqueezeBERT}: What can computer vision teach NLP about efficient neural networks?}, journal = {arXiv:2006.11316}, year = {2020} } ```
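Since this checkpoint is pretrained but not finetuned, a typical use is to load it as an encoder and fine-tune it on your own task. A minimal feature-extraction sketch, assuming the `Auto*` classes in a `transformers` version that supports SqueezeBERT, might look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = AutoModel.from_pretrained("squeezebert/squeezebert-uncased")

text = "SqueezeBERT replaces the pointwise fully-connected layers with grouped convolutions."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Contextual token embeddings: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```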
spentaur/yelp
spentaur
2020-12-11T22:02:07Z
76
1
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# DistilBERT Yelp Review Sentiment

This model is used for sentiment analysis on English Yelp reviews. It is a DistilBERT model trained on 1 million reviews from the Yelp open dataset. It is a regression model, with outputs in the range of ~-2 to ~2, where -2 corresponds to 1 star and 2 to 5 stars. It was trained using the [ktrain](https://github.com/amaiya/ktrain) library because of its ease of use.

Example use:

```
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(
    'distilbert-base-uncased', use_fast=True)
model = TFAutoModelForSequenceClassification.from_pretrained(
    "spentaur/yelp")

review = "This place is great!"
input_ids = tokenizer.encode(review, return_tensors='tf')
pred = model(input_ids)[0][0][0].numpy()
# pred should == 1.9562385
```
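Because the raw output is a regression score rather than a class label, a small helper can map it to an approximate star rating. The linear shift-and-clamp below is an assumption based on the -2 → 1 star / 2 → 5 stars description above, not part of the original training code:

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased', use_fast=True)
model = TFAutoModelForSequenceClassification.from_pretrained("spentaur/yelp")

def predict_stars(review: str) -> float:
    """Return an approximate 1-5 star rating for a review (assumed linear mapping)."""
    input_ids = tokenizer.encode(review, return_tensors='tf')
    score = float(model(input_ids)[0][0][0].numpy())
    # -2 maps to 1 star and +2 maps to 5 stars, so shift by 3 and clamp to the valid range
    return min(max(score + 3.0, 1.0), 5.0)

print(predict_stars("This place is great!"))            # close to 5 stars
print(predict_stars("Terrible service, never again."))  # closer to 1 star
```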
ramsrigouthamg/t5_paraphraser
ramsrigouthamg
2020-12-11T22:00:04Z
24,441
13
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
## Model in Action 🚀 ```python import torch from transformers import T5ForConditionalGeneration,T5Tokenizer def set_seed(seed): torch.manual_seed(seed) if torch.cuda.is_available(): torch.cuda.manual_seed_all(seed) set_seed(42) model = T5ForConditionalGeneration.from_pretrained('ramsrigouthamg/t5_paraphraser') tokenizer = T5Tokenizer.from_pretrained('ramsrigouthamg/t5_paraphraser') device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print ("device ",device) model = model.to(device) sentence = "Which course should I take to get started in data science?" # sentence = "What are the ingredients required to bake a perfect cake?" # sentence = "What is the best possible approach to learn aeronautical engineering?" # sentence = "Do apples taste better than oranges in general?" text = "paraphrase: " + sentence + " </s>" max_len = 256 encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device) # set top_k = 50 and set top_p = 0.95 and num_return_sequences = 3 beam_outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, do_sample=True, max_length=256, top_k=120, top_p=0.98, early_stopping=True, num_return_sequences=10 ) print ("\nOriginal Question ::") print (sentence) print ("\n") print ("Paraphrased Questions :: ") final_outputs =[] for beam_output in beam_outputs: sent = tokenizer.decode(beam_output, skip_special_tokens=True,clean_up_tokenization_spaces=True) if sent.lower() != sentence.lower() and sent not in final_outputs: final_outputs.append(sent) for i, final_output in enumerate(final_outputs): print("{}: {}".format(i, final_output)) ``` ## Output ``` Original Question :: Which course should I take to get started in data science? Paraphrased Questions :: 0: What should I learn to become a data scientist? 1: How do I get started with data science? 2: How would you start a data science career? 3: How can I start learning data science? 4: How do you get started in data science? 5: What's the best course for data science? 6: Which course should I start with for data science? 7: What courses should I follow to get started in data science? 8: What degree should be taken by a data scientist? 9: Which course should I follow to become a Data Scientist? ``` ## Detailed blog post available here : https://towardsdatascience.com/paraphrase-any-question-with-t5-text-to-text-transfer-transformer-pretrained-model-and-cbb9e35f1555
patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16
patrickvonplaten
2020-12-11T21:59:26Z
4
0
transformers
[ "transformers", "pytorch", "encoder_decoder", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# Shared Roberta2Roberta Summarization with 🤗 EncoderDecoder Framework This model is a shared Roberta2Roberta model, meaning that the encoder and decoder weights are tied, fine-tuned on summarization. Roberta2Roberta is a `EncoderDecoderModel`, meaning that both the encoder and the decoder are `roberta-base` RoBERTa models. In this setup the encoder and decoder weights are tied. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the two pretrained models can simply be loaded into the framework via: ```python roberta2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base", tie_encoder_decoder=True) ``` The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal masking for auto-regressiv generation. Thus, ``roberta2roberta`` is consequently fined-tuned on the `CNN/Daily Mail`dataset and the resulting model `roberta2roberta-share-cnn_dailymail-fp16` is uploaded here. ## Example The model is by no means a state-of-the-art model, but nevertheless produces reasonable summarization results. It was mainly fine-tuned as a proof-of-concept for the 🤗 EncoderDecoder Framework. The model can be used as follows: ```python from transformers import RobertaTokenizer, EncoderDecoderModel model = EncoderDecoderModel.from_pretrained("patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16") tokenizer = RobertaTokenizer.from_pretrained("roberta-base") article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David B oren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 185 6, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confede rate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking fu ll membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on t he fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more invol ved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members al legedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a frat ernity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity, ' " Cohen said. 
In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloy d's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing in cidents.""" input_ids = tokenizer(article, return_tensors="pt").input_ids output_ids = model.generate(input_ids) print(tokenizer.decode(output_ids[0], skip_special_tokens=True)) # should produce # SAE's national chapter suspended after video shows party-bound fraternity members singing racist chant. University of Oklahoma president says university's affiliation with fraternity is permanently done. # SAE has had to close 12 chapters since 2010 after members were killed in hazing. The fraternity has had more than 130 chapters in 18 months. ``` ## Training script: **IMPORTANT**: In order for this code to work, make sure you checkout to the branch [more_general_trainer_metric](https://github.com/huggingface/transformers/tree/more_general_trainer_metric), which slightly adapts the `Trainer` for `EncoderDecoderModels` according to this PR: https://github.com/huggingface/transformers/pull/5840. The following code shows the complete training script that was used to fine-tune `roberta2roberta-cnn_dailymail-fp16 ` for reproducability. The training last ~9h on a standard GPU. ```python #!/usr/bin/env python3 import nlp import logging from transformers import RobertaTokenizer, EncoderDecoderModel, Trainer, TrainingArguments logging.basicConfig(level=logging.INFO) model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base", tie_encoder_decoder=True) tokenizer = RobertaTokenizer.from_pretrained("roberta-base") # load train and validation data train_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train") val_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]") # load rouge for validation rouge = nlp.load_metric("rouge", experiment_id=0) # set decoding params model.config.decoder_start_token_id = tokenizer.bos_token_id model.config.eos_token_id = tokenizer.eos_token_id model.config.max_length = 142 model.config.min_length = 56 model.config.no_repeat_ngram_size = 3 model.early_stopping = True model.length_penalty = 2.0 model.num_beams = 4 encoder_length = 512 decoder_length = 128 batch_size = 16 # map data correctly def map_to_encoder_decoder_inputs(batch): # Tokenizer will automatically set [BOS] <text> [EOS] # cut off at Longformer at 2048 inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length) # force summarization <= 256 outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_length) batch["input_ids"] = inputs.input_ids batch["attention_mask"] = inputs.attention_mask batch["decoder_input_ids"] = outputs.input_ids batch["labels"] = outputs.input_ids.copy() # mask loss for padding batch["labels"] = [ [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"] ] batch["decoder_attention_mask"] = outputs.attention_mask assert all([len(x) == encoder_length for x in inputs.input_ids]) assert all([len(x) == decoder_length for x in outputs.input_ids]) return batch def compute_metrics(pred): labels_ids = pred.label_ids pred_ids = pred.predictions # all unnecessary tokens are removed pred_str = 
tokenizer.batch_decode(pred_ids, skip_special_tokens=True) labels_ids[labels_ids == -100] = tokenizer.eos_token_id label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True) rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid return { "rouge2_precision": round(rouge_output.precision, 4), "rouge2_recall": round(rouge_output.recall, 4), "rouge2_fmeasure": round(rouge_output.fmeasure, 4), } # make train dataset ready train_dataset = train_dataset.map( map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"], ) train_dataset.set_format( type="torch", columns=["input_ids", "attention_mask", "decoder_attention_mask", "decoder_input_ids", "labels"], ) # same for validation dataset val_dataset = val_dataset.map( map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"], ) val_dataset.set_format( type="torch", columns=["input_ids", "decoder_attention_mask", "attention_mask", "decoder_input_ids", "labels"], ) # set training arguments - these params are not really tuned, feel free to change training_args = TrainingArguments( output_dir="./", per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_from_generate=True, evaluate_during_training=True, do_train=True, do_eval=True, logging_steps=1000, save_steps=1000, eval_steps=1000, overwrite_output_dir=True, warmup_steps=2000, save_total_limit=3, fp16=True, ) # instantiate trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=val_dataset, ) # start training trainer.train() ``` ## Evaluation The following script evaluates the model on the test set of CNN/Daily Mail. ```python #!/usr/bin/env python3 import nlp from transformers import RobertaTokenizer, EncoderDecoderModel tokenizer = RobertaTokenizer.from_pretrained("roberta-base") model = EncoderDecoderModel.from_pretrained("patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16") model.to("cuda") test_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="test") batch_size = 128 # map data correctly def generate_summary(batch): # Tokenizer will automatically set [BOS] <text> [EOS] # cut off at BERT max length 512 inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_ids = inputs.input_ids.to("cuda") attention_mask = inputs.attention_mask.to("cuda") outputs = model.generate(input_ids, attention_mask=attention_mask) # all special tokens including will be removed output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True) batch["pred"] = output_str return batch results = test_dataset.map(generate_summary, batched=True, batch_size=batch_size, remove_columns=["article"]) # load rouge for validation rouge = nlp.load_metric("rouge") pred_str = results["pred"] label_str = results["highlights"] rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid print(rouge_output) ``` The obtained results should be: | - | Rouge2 - mid -precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure | |----------|:-------------:|:------:|:------:| | **CNN/Daily Mail** | 15.6 | 18.79 | **16.59** |
patrickvonplaten/roberta2roberta-cnn_dailymail-fp16
patrickvonplaten
2020-12-11T21:59:23Z
18
0
transformers
[ "transformers", "pytorch", "encoder_decoder", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# Roberta2Roberta Summarization with 🤗 EncoderDecoder Framework This model is a Roberta2Roberta model fine-tuned on summarization. Roberta2Roberta is a `EncoderDecoderModel`, meaning that both the encoder and the decoder are `roberta-base` RoBERTa models. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the two pretrained models can simply be loaded into the framework via: ```python roberta2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base") ``` The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal masking for auto-regressiv generation. Thus, ``roberta2roberta`` is consequently fined-tuned on the `CNN/Daily Mail`dataset and the resulting model `roberta2roberta-cnn_dailymail-fp16` is uploaded here. ## Example The model is by no means a state-of-the-art model, but nevertheless produces reasonable summarization results. It was mainly fine-tuned as a proof-of-concept for the 🤗 EncoderDecoder Framework. The model can be used as follows: ```python from transformers import BertTokenizer, EncoderDecoderModel model = EncoderDecoderModel.from_pretrained("patrickvonplaten/roberta2roberta-cnn_dailymail-fp16") tokenizer = RobertaTokenizer.from_pretrained("roberta-base") article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David B oren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 185 6, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confede rate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking fu ll membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on t he fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more invol ved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members al legedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a frat ernity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity, ' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. 
"As a result, we are paying Lloy d's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing in cidents.""" input_ids = tokenizer(article, return_tensors="pt").input_ids output_ids = model.generate(input_ids) print(tokenizer.decode(output_ids[0], skip_special_tokens=True)) # should produce # Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing racist chants. The fraternity's national chapter has had to close 12 in 18 months over hazing. # Sigma has had more than 130 chapters in 18 states. University of Oklahoma president says fraternity has been "deteriorated". ``` ## Training script: **IMPORTANT**: In order for this code to work, make sure you checkout to the branch [more_general_trainer_metric](https://github.com/huggingface/transformers/tree/more_general_trainer_metric), which slightly adapts the `Trainer` for `EncoderDecoderModels` according to this PR: https://github.com/huggingface/transformers/pull/5840. The following code shows the complete training script that was used to fine-tune `roberta2roberta-cnn_dailymail-fp16 ` for reproducability. The training last ~9h on a standard GPU. ```python #!/usr/bin/env python3 import nlp import logging from transformers import RobertaTokenizer, EncoderDecoderModel, Trainer, TrainingArguments logging.basicConfig(level=logging.INFO) model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base") tokenizer = RobertaTokenizer.from_pretrained("roberta-base") # load train and validation data train_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train") val_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]") # load rouge for validation rouge = nlp.load_metric("rouge", experiment_id=0) # set decoding params model.config.decoder_start_token_id = tokenizer.bos_token_id model.config.eos_token_id = tokenizer.eos_token_id model.config.max_length = 142 model.config.min_length = 56 model.config.no_repeat_ngram_size = 3 model.early_stopping = True model.length_penalty = 2.0 model.num_beams = 4 encoder_length = 512 decoder_length = 128 batch_size = 16 # map data correctly def map_to_encoder_decoder_inputs(batch): # Tokenizer will automatically set [BOS] <text> [EOS] # cut off at Longformer at 2048 inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length) # force summarization <= 256 outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_length) batch["input_ids"] = inputs.input_ids batch["attention_mask"] = inputs.attention_mask batch["decoder_input_ids"] = outputs.input_ids batch["labels"] = outputs.input_ids.copy() # mask loss for padding batch["labels"] = [ [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"] ] batch["decoder_attention_mask"] = outputs.attention_mask assert all([len(x) == encoder_length for x in inputs.input_ids]) assert all([len(x) == decoder_length for x in outputs.input_ids]) return batch def compute_metrics(pred): labels_ids = pred.label_ids pred_ids = pred.predictions # all unnecessary tokens are removed pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) labels_ids[labels_ids == -100] = tokenizer.eos_token_id label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True) rouge_output = rouge.compute(predictions=pred_str, 
references=label_str, rouge_types=["rouge2"])["rouge2"].mid return { "rouge2_precision": round(rouge_output.precision, 4), "rouge2_recall": round(rouge_output.recall, 4), "rouge2_fmeasure": round(rouge_output.fmeasure, 4), } # make train dataset ready train_dataset = train_dataset.map( map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"], ) train_dataset.set_format( type="torch", columns=["input_ids", "attention_mask", "decoder_attention_mask", "decoder_input_ids", "labels"], ) # same for validation dataset val_dataset = val_dataset.map( map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"], ) val_dataset.set_format( type="torch", columns=["input_ids", "decoder_attention_mask", "attention_mask", "decoder_input_ids", "labels"], ) # set training arguments - these params are not really tuned, feel free to change training_args = TrainingArguments( output_dir="./", per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_from_generate=True, evaluate_during_training=True, do_train=True, do_eval=True, logging_steps=1000, save_steps=1000, eval_steps=1000, overwrite_output_dir=True, warmup_steps=2000, save_total_limit=3, fp16=True, ) # instantiate trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=val_dataset, ) # start training trainer.train() ``` ## Evaluation The following script evaluates the model on the test set of CNN/Daily Mail. ```python #!/usr/bin/env python3 import nlp from transformers import RobertaTokenizer, EncoderDecoderModel tokenizer = RobertaTokenizer.from_pretrained("roberta-base") model = EncoderDecoderModel.from_pretrained("patrickvonplaten/roberta2roberta-cnn_dailymail-fp16") model.to("cuda") test_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="test") batch_size = 128 # map data correctly def generate_summary(batch): # Tokenizer will automatically set [BOS] <text> [EOS] # cut off at BERT max length 512 inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_ids = inputs.input_ids.to("cuda") attention_mask = inputs.attention_mask.to("cuda") outputs = model.generate(input_ids, attention_mask=attention_mask) # all special tokens including will be removed output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True) batch["pred"] = output_str return batch results = test_dataset.map(generate_summary, batched=True, batch_size=batch_size, remove_columns=["article"]) # load rouge for validation rouge = nlp.load_metric("rouge") pred_str = results["pred"] label_str = results["highlights"] rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid print(rouge_output) ``` The obtained results should be: | - | Rouge2 - mid -precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure | |----------|:-------------:|:------:|:------:| | **CNN/Daily Mail** | 15.79 | 19.05 | **16.79** |
mrm8488/t5-small-finetuned-emotion
mrm8488
2020-12-11T21:56:24Z
11
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:emotion", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - emotion --- # T5-small fine-tuned for Emotion Recognition 😂😢😡😃😯 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [small](https://huggingface.co/t5-small) fine-tuned on [emotion recognition](https://github.com/dair-ai/emotion_dataset) dataset for **Emotion Recognition** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Sentiment Recognition) - Dataset 📚 [Elvis Saravia](https://twitter.com/omarsar0) has gathered a great [dataset](https://github.com/dair-ai/emotion_dataset) for emotion recognition. It allows to classifiy the text into one of the following **6** emotions: - sadness 😢 - joy 😃 - love 🥰 - anger 😡 - fear 😱 - surprise 😯 ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him! 
## Test set metrics 🧾

| | precision | recall | f1-score | support |
|--------------|-----------|---------|----------|---------|
| anger | 0.92 | 0.93 | 0.92 | 275 |
| fear | 0.90 | 0.90 | 0.90 | 224 |
| joy | 0.97 | 0.91 | 0.94 | 695 |
| love | 0.75 | 0.89 | 0.82 | 159 |
| sadness | 0.96 | 0.97 | 0.96 | 581 |
| surprise | 0.73 | 0.80 | 0.76 | 66 |
| accuracy | | | 0.92 | 2000 |
| macro avg | 0.87 | 0.90 | 0.88 | 2000 |
| weighted avg | 0.93 | 0.92 | 0.92 | 2000 |

Confusion Matrix

![CM](https://i.imgur.com/JBtAwPx.png)

## Model in Action 🚀

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-emotion")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-emotion")

def get_emotion(text):
  input_ids = tokenizer.encode(text + '</s>', return_tensors='pt')
  output = model.generate(input_ids=input_ids, max_length=2)
  dec = [tokenizer.decode(ids) for ids in output]
  label = dec[0]
  return label

get_emotion("i feel as if i havent blogged in ages are at least truly blogged i am doing an update cute") # Output: 'joy'

get_emotion("i have a feeling i kinda lost my best friend") # Output: 'sadness'
```

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-wikiSQL-sql-to-en
mrm8488
2020-12-11T21:56:17Z
35
12
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:wikisql", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - wikisql --- # T5-base fine-tuned on WikiSQL for SQL to English translation [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [WikiSQL](https://github.com/salesforce/WikiSQL) for **SQL** to **English** **translation** task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the Dataset 📚 Dataset ID: ```wikisql``` from [Huggingface/NLP](https://huggingface.co/nlp/viewer/?dataset=wikisql) | Dataset | Split | # samples | | -------- | ----- | --------- | | wikisql | train | 56355 | | wikisql | valid | 14436 | How to load it from [nlp](https://github.com/huggingface/nlp) ```python train_dataset = nlp.load_dataset('wikisql', split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset('wikisql', split=nlp.Split.VALIDATION) ``` Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him! ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en") def get_explanation(query): input_text = "translate Sql to English: %s </s>" % query features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask']) return tokenizer.decode(output[0]) query = "SELECT COUNT Params form model where location=HF-Hub" get_explanation(query) # output: 'How many parameters form model for HF-hub?' 
``` Play with it in a Colab: <img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/t5-base-finetuned-qasc
mrm8488
2020-12-11T21:55:50Z
30
5
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:qasc", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - qasc --- # T5-base fine-tuned on QASC [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [QASC](https://allenai.org/data/qasc) for **QA** (via *sentence composition*) downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* in Here the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. ![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the dataset 📚 **Question Answering via Sentence Composition** (QASC) is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences. ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28). The **context** passed to the *encoder* is the combination of the 2 *facts* (`fact1` and `fact2`). The **question** is just the `formatted_question` field. The **answer** passed to the *decoder* is the`text` right answer instead of the `label` (A, B, C... See `choices` field). More details about the dataset format/fields [here](https://huggingface.co/nlp/viewer/?dataset=qasc) ## Metrics on validation set 📋 | Metric | Score | |--------|-------| |Accuracy (EM) | **97.73**| ## Model in Action 🚀 ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-qasc") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-qasc") def get_response(question, context, max_length=64): input_text = 'question: %s context: %s' % (question, context) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'], max_length=max_length) return tokenizer.decode(output[0]) fact_1 = 'a watch is used for measuring time' fact_2 = 'Times are measured in seconds.' context = fact_1 + ' ' + fact_2 question = 'What can be used to measure seconds? 
(A) Watch (B) seconds (C) fluid (D) Ruler (E) goggles (F) glasses (G) Drill (H) Scale' get_response(question, context) # output: 'Watch' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electricidad-base-generator
mrm8488
2020-12-11T21:54:10Z
7
3
transformers
[ "transformers", "pytorch", "electra", "fill-mask", "es", "arxiv:1406.2661", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: es thumbnail: https://i.imgur.com/uxAvBfh.png widget: - text: "Madrid es una ciudad muy [MASK] en España." --- ## ELECTRICIDAD: The Spanish Electra [Imgur](https://imgur.com/uxAvBfh) **Electricidad-base-generator** (uncased) is a ```base``` Electra like model (generator in this case) trained on a + 20 GB of the [OSCAR](https://oscar-corpus.com/) Spanish corpus. As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB): **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). ## Fast example of usage 🚀 ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="mrm8488/electricidad-base-generator", tokenizer="mrm8488/electricidad-base-generator" ) print( fill_mask(f"HuggingFace está creando {fill_mask.tokenizer.mask_token} que la comunidad usa para resolver tareas de NLP.") ) # Output: [{'sequence': '[CLS] huggingface esta creando herramientas que la comunidad usa para resolver tareas de nlp. [SEP]', 'score': 0.0896105170249939, 'token': 8760, 'token_str': 'herramientas'}, ...] ``` ## Acknowledgments I thank [🤗/transformers team](https://github.com/huggingface/transformers) for allowing me to train the model (specially to [Julien Chaumond](https://twitter.com/julien_c)). > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electra-small-finetuned-squadv1
mrm8488
2020-12-11T21:53:59Z
7
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "en", "arxiv:1406.2661", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en --- # Electra small ⚡ + SQuAD v1 ❓ [Electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task. ## Details of the downstream task (Q&A) - Model 🧠 **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. ## Details of the downstream task (Q&A) - Dataset 📚 **S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles. ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python transformers/examples/question-answering/run_squad.py \ --model_type electra \ --model_name_or_path 'google/electra-small-discriminator' \ --do_eval \ --do_train \ --do_lower_case \ --train_file '/content/dataset/train-v1.1.json' \ --predict_file '/content/dataset/dev-v1.1.json' \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 10 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir '/content/output' \ --overwrite_output_dir \ --save_steps 1000 ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **77.70** | | **F1** | **85.74** | | **Size**| **50 MB** | Very good metrics for such a "small" model! ```json { 'exact': 77.70104068117313, 'f1': 85.73991234187997, 'total': 10570, 'HasAns_exact': 77.70104068117313, 'HasAns_f1': 85.73991234187997, 'HasAns_total': 10570, 'best_exact': 77.70104068117313, 'best_exact_thresh': 0.0, 'best_f1': 85.73991234187997, 'best_f1_thresh': 0.0 } ``` ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-small-finetuned-squadv1') QnA_pipeline({ 'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.', 'question': 'What has been discovered by scientists from China ?' }) # Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.7950334108113424, 'start': 0} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electra-base-finetuned-squadv1
mrm8488
2020-12-11T21:53:55Z
4
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "en", "arxiv:1406.2661", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en --- # Electra base ⚡ + SQuAD v1 ❓ [Electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task. ## Details of the downstream task (Q&A) - Model 🧠 **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. ## Details of the downstream task (Q&A) - Dataset 📚 **S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles. ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python transformers/examples/question-answering/run_squad.py \ --model_type electra \ --model_name_or_path 'google/electra-base-discriminator' \ --do_eval \ --do_train \ --do_lower_case \ --train_file '/content/dataset/train-v1.1.json' \ --predict_file '/content/dataset/dev-v1.1.json' \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 10 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir '/content/output' \ --overwrite_output_dir \ --save_steps 1000 ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **83.03** | | **F1** | **90.77** | | **Size**| **+ 400 MB** | Very good metrics for such a "small" model! ```json { 'exact': 83.03689687795648, 'f1': 90.77486052446231, 'total': 10570, 'HasAns_exact': 83.03689687795648, 'HasAns_f1': 90.77486052446231, 'HasAns_total': 10570, 'best_exact': 83.03689687795648, 'best_exact_thresh': 0.0, 'best_f1': 90.77486052446231, 'best_f1_thresh': 0.0 } ``` ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-base-finetuned-squadv1') QnA_pipeline({ 'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.', 'question': 'What has been discovered by scientists from China ?' }) # Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.9995211430099182, 'start': 0} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/distilbert-multi-finetuned-for-xqua-on-tydiqa
mrm8488
2020-12-11T21:53:48Z
51
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "multilingual", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: multilingual thumbnail: --- # DistilBERT multilingual fine-tuned on TydiQA (GoldP task) dataset for multilingual Q&A 😛🌍❓ ## Details of the language model [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) ## Details of the Tydi QA dataset TyDi QA contains 200k human-annotated question-answer pairs in 11 Typologically Diverse languages, written without seeing the answer and without the use of translation, and is designed for the **training and evaluation** of automatic question answering systems. This repository provides evaluation code and a baseline system for the dataset. https://ai.google.com/research/tydiqa ## Details of the downstream task (Gold Passage or GoldP aka the secondary task) Given a passage that is guaranteed to contain the answer, predict the single contiguous span of characters that answers the question. the gold passage task differs from the [primary task](https://github.com/google-research-datasets/tydiqa/blob/master/README.md#the-tasks) in several ways: * only the gold answer passage is provided rather than the entire Wikipedia article; * unanswerable questions have been discarded, similar to MLQA and XQuAD; * we evaluate with the SQuAD 1.1 metrics like XQuAD; and * Thai and Japanese are removed since the lack of whitespace breaks some tools. ## Model training 💪🏋️‍ The model was fine-tuned on a Tesla P100 GPU and 25GB of RAM. The script is the following: ```python python transformers/examples/question-answering/run_squad.py \ --model_type distilbert \ --model_name_or_path distilbert-base-multilingual-cased \ --do_train \ --do_eval \ --train_file /path/to/dataset/train.json \ --predict_file /path/to/dataset/dev.json \ --per_gpu_train_batch_size 24 \ --per_gpu_eval_batch_size 24 \ --learning_rate 3e-5 \ --num_train_epochs 5 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /content/model_output \ --overwrite_output_dir \ --save_steps 1000 \ --threads 400 ``` ## Global Results (dev set) 📝 | Metric | # Value | | --------- | ----------- | | **EM** | **63.85** | | **F1** | **75.70** | ## Specific Results (per language) 🌍📝 | Language | # Samples | # EM | # F1 | | --------- | ----------- |--------| ------ | | Arabic | 1314 | 66.66 | 80.02 | | Bengali | 180 | 53.09 | 63.50 | | English | 654 | 62.42 | 73.12 | | Finnish | 1031 | 64.57 | 75.15 | | Indonesian| 773 | 67.89 | 79.70 | | Korean | 414 | 51.29 | 61.73 | | Russian | 1079 | 55.42 | 70.08 | | Swahili | 596 | 74.51 | 81.15 | | Telegu | 874 | 66.21 | 79.85 | ## Similar models You can also try [bert-multi-cased-finedtuned-xquad-tydiqa-goldp](https://huggingface.co/mrm8488/bert-multi-cased-finedtuned-xquad-tydiqa-goldp) that achieves **F1 = 82.16** and **EM = 71.06** (And of course better marks per language). > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
m3hrdadfi/bert2bert-fa-wiki-summary
m3hrdadfi
2020-12-11T21:50:20Z
37
2
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "summarization", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: fa license: apache-2.0 tags: - summarization --- A Bert2Bert model on the Wiki Summary dataset to summarize articles. The model achieved an 8.47 ROUGE-2 score. For more detail, please follow the [Wiki Summary](https://github.com/m3hrdadfi/wiki-summary) repo. ## Eval results The following table summarizes the ROUGE scores obtained by the Bert2Bert model. | % | Precision | Recall | FMeasure | |:-------:|:---------:|:------:|:--------:| | ROUGE-1 | 28.14 | 30.86 | 27.34 | | ROUGE-2 | 07.12 | 08.47* | 07.10 | | ROUGE-L | 28.49 | 25.87 | 25.50 | ## Questions? Post a Github issue on the [Wiki Summary](https://github.com/m3hrdadfi/wiki-summary/issues) repo.
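The card reports ROUGE scores but gives no usage snippet. Below is a minimal inference sketch, assuming the checkpoint loads with the generic `EncoderDecoderModel` class (as suggested by the `encoder-decoder` tag) and that the saved config already carries the generation settings (decoder start token, etc.); the generation hyperparameters shown are illustrative, not the authors' settings.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Sketch only: assumes the checkpoint loads with EncoderDecoderModel and its bundled tokenizer.
tokenizer = AutoTokenizer.from_pretrained("m3hrdadfi/bert2bert-fa-wiki-summary")
model = EncoderDecoderModel.from_pretrained("m3hrdadfi/bert2bert-fa-wiki-summary")

article = "..."  # a Persian article to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=4,
    max_length=128,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```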
loodos/electra-base-turkish-64k-uncased-discriminator
loodos
2020-12-11T21:49:26Z
7
0
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "tr", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: tr --- # Turkish Language Models with Huggingface's Transformers As R&D Team at Loodos, we release cased and uncased versions of most recent language models for Turkish. More details about pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models). # Turkish ELECTRA-Base-discriminator (uncased/64k) This is the discriminator of an ELECTRA-Base model, which has the same structure as BERT-Base and was trained on an uncased Turkish dataset. This version has a vocab of size 64k, different from the default 32k. ## Usage Using AutoModelWithLMHead and AutoTokenizer from Transformers, you can import the model as described below. ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("loodos/electra-base-turkish-64k-uncased-discriminator", do_lower_case=False) model = AutoModelWithLMHead.from_pretrained("loodos/electra-base-turkish-64k-uncased-discriminator") normalizer = TextNormalization() normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True) tokenizer.tokenize(normalized_text) ``` ### Notes on Tokenizers Currently, Huggingface's tokenizers (which were written in Python) have a bug concerning the letters "ı, i, I, İ" and non-ASCII Turkish-specific letters. There are two reasons. 1- The vocabulary and sentence piece model were created with NFC/NFKC normalization, but the tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text that contains the Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training and loss of information. Some tokens are never trained (like "şanlıurfa", "öğün", "çocuk", etc.). NFD/NFKD normalization is not proper for Turkish. 2- Python's default ```string.lower()``` and ```string.upper()``` make the conversions - "I" and "İ" to 'i' - 'i' and 'ı' to 'I' respectively. However, in Turkish, 'I' and 'İ' are two different letters. We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's github repo about this bug. Until it is fixed, in case you want to train your model with uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models). ## Details and Contact You can contact us to ask a question, open an issue or give feedback via our github [repo](https://github.com/Loodos/turkish-language-models). ## Acknowledgments Many thanks to the TFRC Team for providing us cloud TPUs on Tensorflow Research Cloud to train our models.
loodos/albert-base-turkish-uncased
loodos
2020-12-11T21:49:21Z
50
1
transformers
[ "transformers", "pytorch", "tf", "albert", "tr", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: tr --- # Turkish Language Models with Huggingface's Transformers As R&D Team at Loodos, we release cased and uncased versions of most recent language models for Turkish. More details about pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models). # Turkish ALBERT-Base (uncased) This is ALBERT-Base model which has 12 repeated encoder layers with 768 hidden layer size trained on uncased Turkish dataset. ## Usage Using AutoModel and AutoTokenizer from Transformers, you can import the model as described below. ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("loodos/albert-base-turkish-uncased", do_lower_case=False, keep_accents=True) model = AutoModel.from_pretrained("loodos/albert-base-turkish-uncased") normalizer = TextNormalization() normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True) tokenizer.tokenize(normalized_text) ``` ### Notes on Tokenizers Currently, Huggingface's tokenizers (which were written in Python) have a bug concerning letters "ı, i, I, İ" and non-ASCII Turkish specific letters. There are two reasons. 1- Vocabulary and sentence piece model is created with NFC/NFKC normalization but tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text that contains Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training and loss of information. Some tokens are never trained.(like "şanlıurfa", "öğün", "çocuk" etc.) NFD/NFKD normalization is not proper for Turkish. 2- Python's default ```string.lower()``` and ```string.upper()``` make the conversions - "I" and "İ" to 'i' - 'i' and 'ı' to 'I' respectively. However, in Turkish, 'I' and 'İ' are two different letters. We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's github repo about this bug. Until it is fixed, in case you want to train your model with uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models). ## Details and Contact You contact us to ask a question, open an issue or give feedback via our github [repo](https://github.com/Loodos/turkish-language-models). ## Acknowledgments Many thanks to TFRC Team for providing us cloud TPUs on Tensorflow Research Cloud to train our models.
krevas/finance-koelectra-base-generator
krevas
2020-12-11T21:48:30Z
3
0
transformers
[ "transformers", "pytorch", "electra", "fill-mask", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ko --- # 📈 Financial Korean ELECTRA model Pretrained ELECTRA Language Model for Korean (`finance-koelectra-base-generator`) > ELECTRA is a new method for self-supervised language representation learning. It can be used to > pre-train transformer networks using relatively little compute. ELECTRA models are trained to > distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to > the discriminator of a GAN. More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB) or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub. ## Stats The current version of the model is trained on financial news data from Naver News. The final training corpus has a size of 25GB and 2.3B tokens. The model was trained as a cased model on a TITAN RTX GPU for 500k steps. ## Usage ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="krevas/finance-koelectra-base-generator", tokenizer="krevas/finance-koelectra-base-generator" ) print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다.")) ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
krevas/finance-koelectra-base-discriminator
krevas
2020-12-11T21:48:27Z
1
0
transformers
[ "transformers", "pytorch", "electra", "pretraining", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: ko --- # 📈 Financial Korean ELECTRA model Pretrained ELECTRA Language Model for Korean (`finance-koelectra-base-discriminator`) > ELECTRA is a new method for self-supervised language representation learning. It can be used to > pre-train transformer networks using relatively little compute. ELECTRA models are trained to > distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to > the discriminator of a GAN. More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB) or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub. ## Stats The current version of the model is trained on financial news data from Naver News. The final training corpus has a size of 25GB and 2.3B tokens. The model was trained as a cased model on a TITAN RTX GPU for 500k steps. ## Usage ```python from transformers import ElectraForPreTraining, ElectraTokenizer import torch discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-base-discriminator") tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-base-discriminator") sentence = "내일 해당 종목이 대폭 상승할 것이다" fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]] print("fake token : %s" % fake_tokens[predictions.tolist()[1:-1].index(1)]) ``` # Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
jplu/tf-xlm-roberta-large
jplu
2020-12-11T21:48:04Z
144
1
transformers
[ "transformers", "tf", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Tensorflow XLM-RoBERTa In this repository you will find different versions of the XLM-RoBERTa model for Tensorflow. ## XLM-RoBERTa [XLM-RoBERTa](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/) is a scaled cross-lingual sentence encoder. It is trained on 2.5 TB of data filtered from Common Crawl, covering 100 languages. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks. ## Model Weights | Model | Downloads | -------------------------------- | --------------------------------------------------------------------------------------------------------------- | `jplu/tf-xlm-roberta-base` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/tf_model.h5) | `jplu/tf-xlm-roberta-large` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5) ## Usage With Transformers >= 2.4 the Tensorflow models of XLM-RoBERTa can be loaded like: ```python from transformers import TFXLMRobertaModel model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base") ``` Or ``` model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large") ``` ## Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/jplu). ## Acknowledgments Thanks to the whole Huggingface team for the support and their amazing library!
jplu/tf-xlm-roberta-base
jplu
2020-12-11T21:48:00Z
4,839
1
transformers
[ "transformers", "tf", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Tensorflow XLM-RoBERTa In this repository you will find different versions of the XLM-RoBERTa model for Tensorflow. ## XLM-RoBERTa [XLM-RoBERTa](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/) is a scaled cross-lingual sentence encoder. It is trained on 2.5 TB of data filtered from Common Crawl, covering 100 languages. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks. ## Model Weights | Model | Downloads | -------------------------------- | --------------------------------------------------------------------------------------------------------------- | `jplu/tf-xlm-roberta-base` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/tf_model.h5) | `jplu/tf-xlm-roberta-large` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5) ## Usage With Transformers >= 2.4 the Tensorflow models of XLM-RoBERTa can be loaded like: ```python from transformers import TFXLMRobertaModel model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base") ``` Or ``` model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large") ``` ## Huggingface model hub All models are available on the [Huggingface model hub](https://huggingface.co/jplu). ## Acknowledgments Thanks to the whole Huggingface team for the support and their amazing library!
indobenchmark/indobert-lite-large-p1
indobenchmark
2020-12-11T21:45:56Z
40
0
transformers
[ "transformers", "pytorch", "tf", "albert", "feature-extraction", "indobert", "indobenchmark", "indonlu", "id", "dataset:Indo4B", "arxiv:2009.05387", "license:mit", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: id tags: - indobert - indobenchmark - indonlu license: mit inference: false datasets: - Indo4B --- # IndoBERT-Lite Large Model (phase1 - uncased) [IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective. ## All Pre-trained Models | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) | ## How to use ### Load model and tokenizer ```python from transformers import BertTokenizer, AutoModel tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-large-p1") model = AutoModel.from_pretrained("indobenchmark/indobert-lite-large-p1") ``` ### Extract contextual representation ```python x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1) print(x, model(x)[0].sum()) ``` ## Authors <b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti. ## Citation If you use our work, please cite: ```bibtex @inproceedings{wilie2020indonlu, title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding}, author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti}, booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing}, year={2020} } ```
indobenchmark/indobert-lite-base-p2
indobenchmark
2020-12-11T21:45:53Z
35,934
0
transformers
[ "transformers", "pytorch", "tf", "albert", "feature-extraction", "indobert", "indobenchmark", "indonlu", "id", "dataset:Indo4B", "arxiv:2009.05387", "license:mit", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: id tags: - indobert - indobenchmark - indonlu license: mit inference: false datasets: - Indo4B --- # IndoBERT-Lite Base Model (phase2 - uncased) [IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective. ## All Pre-trained Models | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) | ## How to use ### Load model and tokenizer ```python from transformers import BertTokenizer, AutoModel tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-base-p2") model = AutoModel.from_pretrained("indobenchmark/indobert-lite-base-p2") ``` ### Extract contextual representation ```python x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1) print(x, model(x)[0].sum()) ``` ## Authors <b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti. ## Citation If you use our work, please cite: ```bibtex @inproceedings{wilie2020indonlu, title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding}, author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti}, booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing}, year={2020} } ```
indobenchmark/indobert-lite-base-p1
indobenchmark
2020-12-11T21:45:50Z
261
0
transformers
[ "transformers", "pytorch", "tf", "albert", "feature-extraction", "indobert", "indobenchmark", "indonlu", "id", "dataset:Indo4B", "arxiv:2009.05387", "license:mit", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: id tags: - indobert - indobenchmark - indonlu license: mit inference: false datasets: - Indo4B --- # IndoBERT-Lite Base Model (phase1 - uncased) [IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective. ## All Pre-trained Models | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) | | `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) | ## How to use ### Load model and tokenizer ```python from transformers import BertTokenizer, AutoModel tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-lite-base-p1") model = AutoModel.from_pretrained("indobenchmark/indobert-lite-base-p1") ``` ### Extract contextual representation ```python x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1) print(x, model(x)[0].sum()) ``` ## Authors <b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti. ## Citation If you use our work, please cite: ```bibtex @inproceedings{wilie2020indonlu, title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding}, author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti}, booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing}, year={2020} } ```
facebook/rag-token-base
facebook
2020-12-11T21:39:44Z
7,396
17
transformers
[ "transformers", "pytorch", "rag", "en", "dataset:wiki_dpr", "arxiv:2005.11401", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - wiki_dpr thumbnail: https://huggingface.co/front/thumbnails/facebook.png --- ## RAG This is a non-finetuned version of the RAG-Token model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf) by Patrick Lewis, Ethan Perez, Aleksandra Piktus et al. RAG consists of a *question encoder*, *retriever* and a *generator*. The retriever should be a `RagRetriever` instance. The *question encoder* can be any model that can be loaded with `AutoModel` and the *generator* can be any model that can be loaded with `AutoModelForSeq2SeqLM`. This model is a non-finetuned RAG-Token model and was created as follows: ```python from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, AutoTokenizer model = RagTokenForGeneration.from_pretrained_question_encoder_generator("facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large") question_encoder_tokenizer = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base") generator_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large") tokenizer = RagTokenizer(question_encoder_tokenizer, generator_tokenizer) model.config.use_dummy_dataset = True model.config.index_name = "exact" retriever = RagRetriever(model.config, question_encoder_tokenizer, generator_tokenizer) model.save_pretrained("./") tokenizer.save_pretrained("./") retriever.save_pretrained("./") ``` Note that the model is *uncased* so that all capital input letters are converted to lower-case. ## Usage: *Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever, by setting `config.index_name="legacy"` and `config.use_dummy_dataset=False`. The model can be fine-tuned as follows: ```python from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base") retriever = RagRetriever.from_pretrained("facebook/rag-token-base") model = RagTokenForGeneration.from_pretrained("facebook/rag-token-base", retriever=retriever) input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", "michael phelps", return_tensors="pt") outputs = model(input_dict["input_ids"], labels=input_dict["labels"]) loss = outputs.loss # train on loss ```
facebook/rag-sequence-base
facebook
2020-12-11T21:39:37Z
3,522
9
transformers
[ "transformers", "pytorch", "rag", "arxiv:2005.11401", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 thumbnail: https://huggingface.co/front/thumbnails/facebook.png --- ## RAG This is a non-finetuned version of the RAG-Sequence model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf) by Patrick Lewis, Ethan Perez, Aleksandra Piktus et al. RAG consists of a *question encoder*, *retriever* and a *generator*. The retriever should be a `RagRetriever` instance. The *question encoder* can be any model that can be loaded with `AutoModel` and the *generator* can be any model that can be loaded with `AutoModelForSeq2SeqLM`. This model is a non-finetuned RAG-Sequence model and was created as follows: ```python from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration, AutoTokenizer model = RagSequenceForGeneration.from_pretrained_question_encoder_generator("facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large") question_encoder_tokenizer = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base") generator_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large") tokenizer = RagTokenizer(question_encoder_tokenizer, generator_tokenizer) model.config.use_dummy_dataset = True model.config.index_name = "exact" retriever = RagRetriever(model.config, question_encoder_tokenizer, generator_tokenizer) model.save_pretrained("./") tokenizer.save_pretrained("./") retriever.save_pretrained("./") ``` Note that the model is *uncased* so that all capital input letters are converted to lower-case. ## Usage: *Note*: the model uses the *dummy* retriever as a default. Better results are obtained by using the full retriever, by setting `config.index_name="legacy"` and `config.use_dummy_dataset=False`. The model can be fine-tuned as follows: ```python from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-base") retriever = RagRetriever.from_pretrained("facebook/rag-sequence-base") model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-base", retriever=retriever) input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", "michael phelps", return_tensors="pt") outputs = model(input_dict["input_ids"], labels=input_dict["labels"]) loss = outputs.loss # train on loss ```
elgeish/cs224n-squad2.0-albert-base-v2
elgeish
2020-12-11T21:38:54Z
1,062
0
transformers
[ "transformers", "pytorch", "albert", "question-answering", "exbert", "arxiv:2004.07067", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - exbert --- ## CS224n SQuAD2.0 Project Dataset The goal of this model is to save CS224n students GPU time when establishing baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf). The training set used to fine-tune this model is the same as the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however, evaluation and model selection were performed using roughly half of the official dev set, 6078 examples, picked at random. The data files can be found at <https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020 version. Given that the official SQuAD2.0 dev set contains the project's test set, students must make sure not to use the official SQuAD2.0 dev set in any way — including the use of models fine-tuned on the official SQuAD2.0, since they used the official SQuAD2.0 dev set for model selection. <a href="https://huggingface.co/exbert/?model=elgeish/cs224n-squad2.0-albert-base-v2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a> ## Results ```json { "exact": 78.94044093451794, "f1": 81.7724930324639, "total": 6078, "HasAns_exact": 76.28865979381443, "HasAns_f1": 82.20385314478195, "HasAns_total": 2910, "NoAns_exact": 81.37626262626263, "NoAns_f1": 81.37626262626263, "NoAns_total": 3168, "best_exact": 78.95689371503784, "best_exact_thresh": 0.0, "best_f1": 81.78894581298378, "best_f1_thresh": 0.0 } ``` ## Notable Arguments ```json { "do_lower_case": true, "doc_stride": 128, "fp16": false, "fp16_opt_level": "O1", "gradient_accumulation_steps": 24, "learning_rate": 3e-05, "max_answer_length": 30, "max_grad_norm": 1, "max_query_length": 64, "max_seq_length": 384, "model_name_or_path": "albert-base-v2", "model_type": "albert", "num_train_epochs": 3, "per_gpu_train_batch_size": 8, "save_steps": 5000, "seed": 42, "train_batch_size": 8, "version_2_with_negative": true, "warmup_steps": 0, "weight_decay": 0 } ``` ## Environment Setup ```json { "transformers": "2.5.1", "pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0", "python": "3.6.5=hc3d631a_2", "os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux", "gpu": "Tesla V100-SXM2-16GB" } ``` ## How to Cite ```BibTeX @misc{elgeish2020gestalt, title={Gestalt: a Stacking Ensemble for SQuAD2.0}, author={Mohamed El-Geish}, journal={arXiv e-prints}, archivePrefix={arXiv}, eprint={2004.07067}, year={2020}, } ``` ## Related Models * [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2) * [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1) * [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased) * [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
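The card lists the fine-tuning arguments and evaluation results but no inference snippet. A minimal sketch using the standard question-answering pipeline follows; the question and context are made-up examples.

```python
from transformers import pipeline

# Sketch only: the question/context below are illustrative.
qa = pipeline("question-answering", model="elgeish/cs224n-squad2.0-albert-base-v2")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the official SQuAD2.0 training set and evaluated on a held-out half of the dev set.",
    handle_impossible_answer=True,  # the model was trained with SQuAD2.0-style unanswerable questions
)
print(result)
```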
cooelf/limitbert
cooelf
2020-12-11T21:36:18Z
3
0
transformers
[ "transformers", "pytorch", "arxiv:1910.14296", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# LIMIT-BERT Code and model for the *EMNLP 2020 Findings* paper: [LIMIT-BERT: Linguistic Informed Multi-task BERT](https://arxiv.org/abs/1910.14296)) ## Contents 1. [Requirements](#Requirements) 2. [Training](#Training) ## Requirements * Python 3.6 or higher. * Cython 0.25.2 or any compatible version. * [PyTorch](http://pytorch.org/) 1.0.0+. * [EVALB](http://nlp.cs.nyu.edu/evalb/). Before starting, run `make` inside the `EVALB/` directory to compile an `evalb` executable. This will be called from Python for evaluation. * [pytorch-transformers](https://github.com/huggingface/pytorch-transformers) PyTorch 1.0.0+ or any compatible version. #### Pre-trained Models (PyTorch) The following pre-trained models are available for download from Google Drive: * [`LIMIT-BERT`](https://drive.google.com/open?id=1fm0cK2A91iLG3lCpwowCCQSALnWS2X4i): PyTorch version, same setting with BERT-Large-WWM,loading model with [pytorch-transformers](https://github.com/huggingface/pytorch-transformers). ## How to use ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("cooelf/limitbert") model = AutoModel.from_pretrained("cooelf/limitbert") ``` Please see our original repo for the training scripts. https://github.com/cooelf/LIMIT-BERT ## Training To train LIMIT-BERT, simply run: ``` sh run_limitbert.sh ``` ### Evaluation Instructions To test after setting model path: ``` sh test_bert.sh ``` ## Citation ``` @article{zhou2019limit, title={{LIMIT-BERT}: Linguistic informed multi-task {BERT}}, author={Zhou, Junru and Zhang, Zhuosheng and Zhao, Hai}, journal={arXiv preprint arXiv:1910.14296}, year={2019} } ```
almanach/camembert-base-ccnet
almanach
2020-12-11T21:35:15Z
63
1
transformers
[ "transformers", "pytorch", "camembert", "fr", "arxiv:1911.03894", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: fr --- # CamemBERT: a Tasty French Language Model ## Introduction [CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains. For further information or requests, please go to [Camembert Website](https://camembert-model.fr/) ## Pre-trained models | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `camembert-base` | 110M | Base | OSCAR (138 GB of text) | | `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) | | `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) | | `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) | | `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) | | `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) | ## How to use CamemBERT with HuggingFace ##### Load CamemBERT and its sub-word tokenizer : ```python from transformers import CamembertModel, CamembertTokenizer # You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large". tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-ccnet") camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet") camembert.eval() # disable dropout (or leave in train mode to finetune) ``` ##### Filling masks using pipeline ```python from transformers import pipeline camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-ccnet", tokenizer="camembert/camembert-base-ccnet") results = camembert_fill_mask("Le camembert est <mask> :)") # results #[{'sequence': '<s> Le camembert est bon :)</s>', 'score': 0.14011502265930176, 'token': 305}, # {'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.13929404318332672, 'token': 11661}, # {'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.07010319083929062, 'token': 3497}, # {'sequence': '<s> Le camembert est parfait :)</s>', 'score': 0.025885622948408127, 'token': 2528}, # {'sequence': '<s> Le camembert est top :)</s>', 'score': 0.025684962049126625, 'token': 2328}] ``` ##### Extract contextual embedding features from Camembert output ```python import torch # Tokenize in sub-words with SentencePiece tokenized_sentence = tokenizer.tokenize("J'aime le camembert !") # ['▁J', "'", 'aime', '▁le', '▁cam', 'ember', 't', '▁!'] # 1-hot encode and add special starting and end tokens encoded_sentence = tokenizer.encode(tokenized_sentence) # [5, 133, 22, 1250, 16, 12034, 14324, 81, 76, 6] # NB: Can be done in one step : tokenize.encode("J'aime le camembert !") # Feed tokens to Camembert as a torch tensor (batch dim 1) encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0) embeddings, _ = camembert(encoded_sentence) # embeddings.detach() # embeddings.size torch.Size([1, 10, 768]) #tensor([[[ 0.0667, -0.2467, 0.0954, ..., 0.2144, 0.0279, 0.3621], # [-0.0472, 0.4092, -0.6602, ..., 0.2095, 0.1391, -0.0401], # [ 0.1911, -0.2347, -0.0811, ..., 0.4306, -0.0639, 0.1821], # ..., ``` ##### Extract contextual embedding features from all Camembert layers ```python from transformers import CamembertConfig # (Need to reload the model with new config) config = CamembertConfig.from_pretrained("camembert/camembert-base-ccnet", 
output_hidden_states=True) camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet", config=config) embeddings, _, all_layer_embeddings = camembert(encoded_sentence) # all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers) all_layer_embeddings[5] # layer 5 contextual embedding : size torch.Size([1, 10, 768]) #tensor([[[ 0.0057, -0.1022, 0.0163, ..., -0.0675, -0.0360, 0.1078], # [-0.1096, -0.3344, -0.0593, ..., 0.1625, -0.0432, -0.1646], # [ 0.3751, -0.3829, 0.0844, ..., 0.1067, -0.0330, 0.3334], # ..., ``` ## Authors CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. ## Citation If you use our work, please cite: ```bibtex @inproceedings{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ```
aliosm/ai-soco-cpp-roberta-tiny
aliosm
2020-12-11T21:32:46Z
0
0
null
[ "exbert", "authorship-identification", "fire2020", "pan2020", "ai-soco", "dataset:ai-soco", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: "c++" tags: - exbert - authorship-identification - fire2020 - pan2020 - ai-soco license: "mit" datasets: - ai-soco metrics: - perplexity --- # ai-soco-c++-roberta-tiny ## Model description From scratch pre-trained RoBERTa model with 1 layers and 12 attention heads using [AI-SOCO](https://sites.google.com/view/ai-soco-2020) dataset which consists of C++ codes crawled from CodeForces website. ## Intended uses & limitations The model can be used to do code classification, authorship identification and other downstream tasks on C++ programming language. #### How to use You can use the model directly after tokenizing the text using the provided tokenizer with the model files. #### Limitations and bias The model is limited to C++ programming language only. ## Training data The model initialized randomly and trained using [AI-SOCO](https://sites.google.com/view/ai-soco-2020) dataset which contains 100K C++ source codes. ## Training procedure The model trained on Google Colab platform with 8 TPU cores for 200 epochs, 32\*8 batch size, 512 max sequence length and MLM objective. Other parameters were defaulted to the values mentioned in [`run_language_modelling.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) script. Each continues 4 spaces were converted to a single tab character (`\t`) before tokenization. ### BibTeX entry and citation info ```bibtex @inproceedings{ai-soco-2020-fire, title = "Overview of the {PAN@FIRE} 2020 Task on {Authorship Identification of SOurce COde (AI-SOCO)}", author = "Fadel, Ali and Musleh, Husam and Tuffaha, Ibraheem and Al-Ayyoub, Mahmoud and Jararweh, Yaser and Benkhelifa, Elhadj and Rosso, Paolo", booktitle = "Proceedings of The 12th meeting of the Forum for Information Retrieval Evaluation (FIRE 2020)", year = "2020" } ``` <a href="https://huggingface.co/exbert/?model=aliosm/ai-soco-c++-roberta-tiny"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
aliosm/ai-soco-cpp-roberta-tiny-96-clas
aliosm
2020-12-11T21:32:40Z
0
0
null
[ "exbert", "authorship-identification", "fire2020", "pan2020", "ai-soco", "classification", "dataset:ai-soco", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: "c++" tags: - exbert - authorship-identification - fire2020 - pan2020 - ai-soco - classification license: "mit" datasets: - ai-soco metrics: - accuracy --- # ai-soco-c++-roberta-tiny-96-clas ## Model description `ai-soco-c++-roberta-tiny-96` model fine-tuned on [AI-SOCO](https://sites.google.com/view/ai-soco-2020) task. #### How to use You can use the model directly after tokenizing the text using the provided tokenizer with the model files. #### Limitations and bias The model is limited to C++ programming language only. ## Training data The model initialized from [`ai-soco-c++-roberta-tiny-96`](https://github.com/huggingface/transformers/blob/master/model_cards/aliosm/ai-soco-c++-roberta-tiny-96) model and trained using [AI-SOCO](https://sites.google.com/view/ai-soco-2020) dataset to do text classification. ## Training procedure The model trained on Google Colab platform using V100 GPU for 10 epochs, 16 batch size, 512 max sequence length (sequences larger than 512 were truncated). Each continues 4 spaces were converted to a single tab character (`\t`) before tokenization. ## Eval results The model achieved 91.12%/91.02% accuracy on AI-SOCO task and ranked in the 7th place. ### BibTeX entry and citation info ```bibtex @inproceedings{ai-soco-2020-fire, title = "Overview of the {PAN@FIRE} 2020 Task on {Authorship Identification of SOurce COde (AI-SOCO)}", author = "Fadel, Ali and Musleh, Husam and Tuffaha, Ibraheem and Al-Ayyoub, Mahmoud and Jararweh, Yaser and Benkhelifa, Elhadj and Rosso, Paolo", booktitle = "Proceedings of The 12th meeting of the Forum for Information Retrieval Evaluation (FIRE 2020)", year = "2020" } ``` <a href="https://huggingface.co/exbert/?model=aliosm/ai-soco-c++-roberta-tiny-96-clas"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
aliosm/ai-soco-cpp-roberta-small
aliosm
2020-12-11T21:32:38Z
0
0
null
[ "exbert", "authorship-identification", "fire2020", "pan2020", "ai-soco", "dataset:ai-soco", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: "c++" tags: - exbert - authorship-identification - fire2020 - pan2020 - ai-soco license: "mit" datasets: - ai-soco metrics: - perplexity --- # ai-soco-c++-roberta-small ## Model description From scratch pre-trained RoBERTa model with 6 layers and 12 attention heads using [AI-SOCO](https://sites.google.com/view/ai-soco-2020) dataset which consists of C++ codes crawled from CodeForces website. ## Intended uses & limitations The model can be used to do code classification, authorship identification and other downstream tasks on C++ programming language. #### How to use You can use the model directly after tokenizing the text using the provided tokenizer with the model files. #### Limitations and bias The model is limited to C++ programming language only. ## Training data The model initialized randomly and trained using [AI-SOCO](https://sites.google.com/view/ai-soco-2020) dataset which contains 100K C++ source codes. ## Training procedure The model trained on Google Colab platform with 8 TPU cores for 200 epochs, 16\*8 batch size, 512 max sequence length and MLM objective. Other parameters were defaulted to the values mentioned in [`run_language_modelling.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) script. Each continues 4 spaces were converted to a single tab character (`\t`) before tokenization. ### BibTeX entry and citation info ```bibtex @inproceedings{ai-soco-2020-fire, title = "Overview of the {PAN@FIRE} 2020 Task on {Authorship Identification of SOurce COde (AI-SOCO)}", author = "Fadel, Ali and Musleh, Husam and Tuffaha, Ibraheem and Al-Ayyoub, Mahmoud and Jararweh, Yaser and Benkhelifa, Elhadj and Rosso, Paolo", booktitle = "Proceedings of The 12th meeting of the Forum for Information Retrieval Evaluation (FIRE 2020)", year = "2020" } ``` <a href="https://huggingface.co/exbert/?model=aliosm/ai-soco-c++-roberta-small"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
akhooli/mbart-large-cc25-ar-en
akhooli
2020-12-11T21:32:04Z
17
4
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "translation", "ar", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- tags: - translation language: - ar - en license: mit --- ### mbart-large-ar-en This is mbart-large-cc25, fine-tuned on a subset of the OPUS corpus for ar_en. Usage: see the [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing) Note: the model has a limited training set and is not fully trained (do not use it in production). Other models by me: [Abed Khooli](https://huggingface.co/akhooli)
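Besides the linked notebook, a minimal local inference sketch is shown below, assuming the checkpoint loads with the generic seq2seq auto classes; mBART language-code handling (e.g. forcing the English start token) may need adjusting depending on your transformers version, and the input sentence is a placeholder.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch only: language-code handling may differ across transformers versions.
model_id = "akhooli/mbart-large-cc25-ar-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

arabic_text = "..."  # an Arabic sentence to translate
inputs = tokenizer(arabic_text, return_tensors="pt")
generated = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```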
ViktorAlm/electra-base-norwegian-uncased-discriminator
ViktorAlm
2020-12-11T21:30:55Z
7
0
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "no", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: no thumbnail: https://i.imgur.com/QqSEC5I.png --- # Norwegian Electra ![Image of norwegian electra](https://i.imgur.com/QqSEC5I.png) Trained on OSCAR + Wikipedia + OpenSubtitles + some other data I had, with the awesome power of TPUs (v3-8). Use with caution. I have no downstream tasks in Norwegian to test on, so I have no idea of its performance yet. # Model ## Electra: Pre-training Text Encoders as Discriminators Rather Than Generators Kevin Clark and Minh-Thang Luong and Quoc V. Le and Christopher D. Manning - https://openreview.net/pdf?id=r1xMH1BtvB - https://github.com/google-research/electra # Acknowledgments ### TensorFlow Research Cloud Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ - https://www.tensorflow.org/tfrc #### OSCAR corpus - https://oscar-corpus.com/ #### OPUS - http://opus.nlpl.eu/ - http://www.opensubtitles.org/
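Since the card has no usage example, here is a minimal sketch that queries the discriminator for token-level real/fake predictions, mirroring the pattern used by other ELECTRA discriminator cards in this collection; the Norwegian sentence is a made-up example.

```python
from transformers import ElectraForPreTraining, ElectraTokenizer

# Sketch only: the input sentence is illustrative.
model_id = "ViktorAlm/electra-base-norwegian-uncased-discriminator"
tokenizer = ElectraTokenizer.from_pretrained(model_id)
discriminator = ElectraForPreTraining.from_pretrained(model_id)

sentence = "jeg liker å lese bøker om norsk historie"
inputs = tokenizer(sentence, return_tensors="pt")
logits = discriminator(**inputs)[0]
# A positive logit means the discriminator thinks the token was replaced ("fake").
predictions = (logits > 0).int().squeeze().tolist()
print(list(zip(tokenizer.tokenize(sentence), predictions[1:-1])))
```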
Rostlab/prot_bert_bfd
Rostlab
2020-12-11T21:30:10Z
47,440
15
transformers
[ "transformers", "pytorch", "tf", "fill-mask", "protein language model", "dataset:BFD", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
language: protein
tags:
- protein language model
datasets:
- BFD
---

# ProtBert-BFD model

Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in [this repository](https://github.com/agemagician/ProtTrans). This model is trained on uppercase amino acids: it only works with capital-letter amino acids.

## Model description

ProtBert-BFD is based on the Bert model, pretrained on a large corpus of protein sequences in a self-supervised fashion. This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those protein sequences.

One important difference between our Bert model and the original Bert version is that each sequence is treated as a separate document. This means the next-sentence-prediction objective is not used, as each sequence is handled as a complete document. The masking follows the original Bert training, randomly masking 15% of the amino acids in the input.

In the end, the features extracted from this model revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences.

## Intended uses & limitations

The model can be used for protein feature extraction or fine-tuned on downstream tasks. We have noticed that for some tasks you can gain more accuracy by fine-tuning the model rather than using it as a feature extractor.

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import BertForMaskedLM, BertTokenizer, pipeline
>>> tokenizer = BertTokenizer.from_pretrained('Rostlab/prot_bert_bfd', do_lower_case=False)
>>> model = BertForMaskedLM.from_pretrained("Rostlab/prot_bert_bfd")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker('D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T')

[{'score': 0.1165614128112793,
  'sequence': '[CLS] D L I P T S S K L V V L D T S L Q V K K A F F A L V T [SEP]',
  'token': 5,
  'token_str': 'L'},
 {'score': 0.08976086974143982,
  'sequence': '[CLS] D L I P T S S K L V V V D T S L Q V K K A F F A L V T [SEP]',
  'token': 8,
  'token_str': 'V'},
 {'score': 0.08864385634660721,
  'sequence': '[CLS] D L I P T S S K L V V S D T S L Q V K K A F F A L V T [SEP]',
  'token': 10,
  'token_str': 'S'},
 {'score': 0.06227643042802811,
  'sequence': '[CLS] D L I P T S S K L V V A D T S L Q V K K A F F A L V T [SEP]',
  'token': 6,
  'token_str': 'A'},
 {'score': 0.06194969266653061,
  'sequence': '[CLS] D L I P T S S K L V V T D T S L Q V K K A F F A L V T [SEP]',
  'token': 15,
  'token_str': 'T'}]
```

Here is how to use this model to get the features of a given protein sequence in PyTorch:

```python
from transformers import BertModel, BertTokenizer
import re

tokenizer = BertTokenizer.from_pretrained('Rostlab/prot_bert_bfd', do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert_bfd")

sequence_Example = "A E T C Z A O"
sequence_Example = re.sub(r"[UZOB]", "X", sequence_Example)
encoded_input = tokenizer(sequence_Example, return_tensors='pt')
output = model(**encoded_input)
```

## Training data

The ProtBert-BFD model was pretrained on [BFD](https://bfd.mmseqs.com/), a dataset consisting of 2.1 billion protein sequences.

## Training procedure

### Preprocessing

The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 21. The inputs of the model are then of the form:

```
[CLS] Protein Sequence A [SEP] Protein Sequence B [SEP]
```

Furthermore, each protein sequence was treated as a separate document. The preprocessing step was performed twice, once for a combined length (2 sequences) of less than 512 amino acids, and another time using a combined length (2 sequences) of less than 2048 amino acids.

The details of the masking procedure for each sequence followed the original Bert model as follows:
- 15% of the amino acids are masked.
- In 80% of the cases, the masked amino acids are replaced by `[MASK]`.
- In 10% of the cases, the masked amino acids are replaced by a random amino acid (different) from the one they replace.
- In the 10% remaining cases, the masked amino acids are left as is.

### Pretraining

The model was trained on a single TPU Pod V3-1024 for one million steps in total: 800k steps using sequence length 512 (batch size 32k), and 200k steps using sequence length 2048 (batch size 6k). The optimizer used is Lamb with a learning rate of 0.002, a weight decay of 0.01, learning rate warmup for 140k steps and linear decay of the learning rate after.

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

Test results:

| Task/Dataset | secondary structure (3-states) | secondary structure (8-states) | Localization | Membrane |
|:-----:|:-----:|:-----:|:-----:|:-----:|
| CASP12 | 76 | 65 | | |
| TS115 | 84 | 73 | | |
| CB513 | 83 | 70 | | |
| DeepLoc | | | 78 | 91 |

### BibTeX entry and citation info

```bibtex
@article {Elnaggar2020.07.12.199554,
	author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and BHOWMIK, DEBSINDHU and Rost, Burkhard},
	title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
	elocation-id = {2020.07.12.199554},
	year = {2020},
	doi = {10.1101/2020.07.12.199554},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112 times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8 states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability ProtTrans: \&lt;a href="https://github.com/agemagician/ProtTrans"\&gt;https://github.com/agemagician/ProtTrans\&lt;/a\&gt;Competing Interest StatementThe authors have declared no competing interest.},
	URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554},
	eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf},
	journal = {bioRxiv}
}
```

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
Ogayo/Hel-ach-en
Ogayo
2020-12-11T21:30:01Z
15
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "ach", "en", "dataset:JW300", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
language:
- ach
- en
tags:
- translation
license: cc-by-4.0
datasets:
- JW300
metrics:
- bleu
---

# HEL-ACH-EN

## Model description

MT model translating Acholi to English, initialized with weights from [opus-mt-luo-en](https://huggingface.co/Helsinki-NLP/opus-mt-luo-en) on HuggingFace.

## Intended uses & limitations

Machine translation experiments. Do not use for sensitive tasks.

#### How to use

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Ogayo/Hel-ach-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Ogayo/Hel-ach-en")
```

#### Limitations and bias

Trained on Jehovah's Witnesses data, so it reflects their religious and Christian views.

## Training data

Trained on OPUS JW300 data. Initialized with weights from [opus-mt-luo-en](https://huggingface.co/Helsinki-NLP/opus-mt-luo-en?text=Bed+gi+nyasi+mar+chieng%27+nyuol+mopong%27+gi+mor%21#model_card).

## Training procedure

Removed duplicates and rows with no alphabetic characters. Trained on a GPU.

## Eval results

| testset | BLEU |
| --- | --- |
| JW300.luo.en | 46.1 |
cinmodel/electra-small-japanese-generator
cinmodel
2020-12-11T21:26:17Z
6
2
transformers
[ "transformers", "pytorch", "electra", "fill-mask", "ja", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
language: ja
---

## Japanese ELECTRA-small

We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).

Our pretraining process employs subword units derived from the [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/latest), using the [Byte-Pair Encoding](https://www.aclweb.org/anthology/P16-1162.pdf) method and building on an initial tokenization with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd). For optimal performance, please take care to set your MeCab dictionary appropriately.

```
# ELECTRA-small generator usage
from transformers import BertJapaneseTokenizer, ElectraForMaskedLM

tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-generator', mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"})

model = ElectraForMaskedLM.from_pretrained('Cinnamon/electra-small-japanese-generator')
```
nielsr/tapas-base
nielsr
2020-12-11T11:12:17Z
3
0
transformers
[ "transformers", "pytorch", "tapas", "feature-extraction", "sequence-classification", "en", "arxiv:2004.02349", "arxiv:2010.00571", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
---

# TAPAS base model

This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).

The other (non-default) version which can be used is the one with absolute position embeddings:
- `revision="v1"`, which corresponds to `tapas_inter_masklm_base`

Disclaimer: The team releasing TAPAS did not write a model card for this model, so this model card has been written by the Hugging Face team and contributors.

## Model description

TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.

This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on a downstream task.

## Intended uses & limitations

You can use the raw model for getting hidden representations about table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification.

See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you.

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence [SEP] Flattened table [SEP]
```

### Pre-training

The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512. In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details.

The optimizer used is Adam with a learning rate of 5e-5, and a warmup ratio of 0.01.

### BibTeX entry and citation info

```bibtex
@misc{herzig2020tapas,
      title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
      author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
      year={2020},
      eprint={2004.02349},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```

```bibtex
@misc{eisenschlos2020understanding,
      title={Understanding tables with intermediate pre-training},
      author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
      year={2020},
      eprint={2010.00571},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
stefan-it/flair-ner-conll03
stefan-it
2020-12-11T10:07:20Z
7
0
flair
[ "flair", "pytorch", "sequence-tagger-model", "en", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: en
tags:
- flair
- sequence-tagger-model
license: mit
---

# CoNLL-2003 NER Model

Imported sequence tagger model for Flair, trained on the English CoNLL-2003 corpus for NER.
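Below is a minimal usage sketch with the Flair library; it assumes this repository id can be passed directly to `SequenceTagger.load`, and the example sentence is purely illustrative.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the imported tagger (assumes the Hub repo id resolves to the Flair model weights)
tagger = SequenceTagger.load("stefan-it/flair-ner-conll03")

# Tag a sample sentence and print the detected entity spans
sentence = Sentence("George Washington went to Washington.")
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```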
bewgle/bart-large-mnli-bewgle
bewgle
2020-12-09T18:30:05Z
5
0
transformers
[ "transformers", "pytorch", "bart", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
widget:
- text: "I like you. </s></s> I love you."
---

## bart-large-mnli

Trained by Facebook, [original source](https://github.com/pytorch/fairseq/tree/master/examples/bart)
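Since this is an MNLI-trained checkpoint, it can also back zero-shot classification. The sketch below assumes the model config keeps the standard entailment/contradiction label mapping required by the `zero-shot-classification` pipeline; the input text and candidate labels are placeholders.

```python
from transformers import pipeline

# Zero-shot classification on top of the NLI head
classifier = pipeline("zero-shot-classification", model="bewgle/bart-large-mnli-bewgle")

result = classifier(
    "I have a problem with my iPhone that needs to be resolved asap!",
    candidate_labels=["urgent", "not urgent", "phone", "billing"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```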
google/t5-11b-ssm-wqo
google
2020-12-07T08:47:33Z
0
1
null
[ "en", "dataset:c4", "dataset:wikipedia", "dataset:web_questions", "arxiv:2002.08909", "arxiv:1910.10683", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en datasets: - c4 - wikipedia - web_questions license: apache-2.0 --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**. The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Web Questions (WQ)](https://huggingface.co/datasets/web_questions). **Note**: The model was fine-tuned on 90% of the train splits of [Web Questions (WQ)](https://huggingface.co/datasets/web_questions) for 20k steps and validated on the held-out 10% of the train split. Other community Checkpoints: [here](https://huggingface.co/models?search=ssm) Paper: [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf) Authors: *Adam Roberts, Colin Raffel, Noam Shazeer* ## Results on Web Questions - Test Set |Id | link | Exact Match | |---|---|---| |**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-wqo**|**40.8**| |T5-xxl|https://huggingface.co/google/t5-xxl-ssm-wqo|42.8| ## Usage The model can be used as follows for **closed book question answering**: ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-wqo") t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-wqo") input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids gen_output = t5_qa_model.generate(input_ids)[0] print(t5_tok.decode(gen_output, skip_special_tokens=True)) ``` ## Abstract It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa. ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/how_much_know_ledge_image.png)
joelniklaus/distilbert-based-german-cased-ler
joelniklaus
2020-11-30T12:52:05Z
5
0
transformers
[ "transformers", "pytorch", "tf", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# distilbert-base-german-cased-ler

Task: ler

Base Model: distilbert-base-german-cased

Trained for 3 epochs

Batch-size: 12

Seed: 42

Test F1-Score: 0.936
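A minimal usage sketch with the `transformers` token-classification pipeline; the example sentence is illustrative, and the returned entity labels depend on the LER tag set shipped with this checkpoint.

```python
from transformers import pipeline

# Legal entity recognition (LER) for German text
ler = pipeline(
    "token-classification",
    model="joelniklaus/distilbert-based-german-cased-ler",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ler("Das Bundesverfassungsgericht in Karlsruhe verhandelte am 3. März 2020."))
```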
seduerr/t5_base_paws_ger
seduerr
2020-11-30T11:17:06Z
16
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# T5 Base with Paraphrases in German Language

This T5 base model has been trained on the German part of the PAWS-X data set. It can be used like any T5 model and will generate paraphrases when prompted with the keyword `paraphrase: ` followed by the German sentence.

Please contact me if you need more information ([email protected]).

Thank you.

Sebastian
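A minimal sketch of the prompt format described above; the generation settings (beam search, number of returned paraphrases) are illustrative rather than tuned values.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("seduerr/t5_base_paws_ger")
model = T5ForConditionalGeneration.from_pretrained("seduerr/t5_base_paws_ger")

# Prefix the German sentence with the 'paraphrase: ' keyword
text = "paraphrase: Das Wetter ist heute sehr schön."
input_ids = tokenizer(text, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=64, num_beams=5, num_return_sequences=3)
for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```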
sshleifer/distill-pegasus-xsum-16-4
sshleifer
2020-10-14T16:16:54Z
16
3
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "summarization", "en", "arxiv:1912.08777", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
---
language: en
tags:
- summarization
---

### Pegasus Models

See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)

Original TF 1 code [here](https://github.com/google-research/pegasus)

Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019

Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)

Task: Summarization

The following is copied from the authors' README.

# Mixed & Stochastic Checkpoints

We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.

| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|

The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled using a 20% uniform noise on importance scores.
- the sentencepiece tokenizer is updated to be able to encode the newline character.

(*) the numbers of the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' sentencepiece tokenizer doesn't encode newlines and loses this information.
- we update the BigPatent dataset to preserve casing; some format cleanings are also changed, please refer to the change in TFDS.

Citation

```
@misc{zhang2019pegasus,
    title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
    author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
    year={2019},
    eprint={1912.08777},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
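For completeness, here is a minimal inference sketch for this distilled checkpoint; the article text and generation defaults are illustrative only.

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "sshleifer/distill-pegasus-xsum-16-4"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

article = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions across parts of California."
)
batch = tokenizer(article, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```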
deep-learning-analytics/triviaqa-t5-base
deep-learning-analytics
2020-09-30T18:50:48Z
55
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "triviaqa", "t5-base", "lm-head", "question-answering", "closed-book", "pipeline:question-answering", "eng", "dataset:triviaqa", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: "eng"
tags:
- triviaqa
- t5-base
- pytorch
- lm-head
- question-answering
- closed-book
- t5
- pipeline:question-answering
datasets:
- triviaqa
widget:
- text: ["Mount Everest is found in which mountain range?","None"]
metrics:
- EM: 17
- Subset match: 24.5
---

# Model name

Closed Book Trivia-QA T5 base

## Model description

This is a T5-base model trained on the No Context Trivia QA data set. The input to the model is a trivia-type question, and the model is tuned to search for the answer in its memory and return it. The pretrained model used here was trained on the Common Crawl (C4) data set. The model was trained for 135 epochs using a batch size of 32 and a learning rate of 1e-3. max_input_length is set to 25 and max_output_length to 10. The model attained an EM score of 17 and a Subset Match score of 24.5.

We have written a blog post that covers the training procedure. Please find it [here](https://medium.com/@priya.dwivedi/build-a-trivia-bot-using-t5-transformer-345ff83205b6).

Test the model on trivia questions from the websites below:

https://www.triviaquestionss.com/easy-trivia-questions/

https://laffgaff.com/easy-trivia-questions-and-answers/

## Usage

```
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/triviaqa-t5-base")
model = AutoModelWithLMHead.from_pretrained("deep-learning-analytics/triviaqa-t5-base")

# Move the model to GPU if one is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)

text = "Who directed the movie Jaws?"
preprocess_text = text.strip().replace("\n", "")
tokenized_text = tokenizer.encode(preprocess_text, return_tensors="pt").to(device)

outs = model.generate(
    tokenized_text,
    max_length=10,
    num_beams=2,
    early_stopping=True,
)

dec = [tokenizer.decode(ids, skip_special_tokens=True) for ids in outs]
print("Predicted Answer: ", dec)
```
sshleifer/bb3b-tok
sshleifer
2020-09-25T18:06:31Z
3
0
transformers
[ "transformers", "blenderbot", "text2text-generation", "translation", "facebook", "convAI", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en thumbnail: tags: - translation - facebook - convAI license: apache-2.0 datasets: - blended_skill_talk metrics: - perplexity --- # Blenderbot-3B ## Model description + [Paper](https://arxiv.org/abs/1907.06616). + [Original PARLAI Code] The abbreviation FSMT stands for FairSeqMachineTranslation All four models are available: * [wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) * [wmt19-ru-en](https://huggingface.co/facebook/wmt19-ru-en) * [wmt19-en-de](https://huggingface.co/facebook/wmt19-en-de) * [wmt19-de-en](https://huggingface.co/facebook/wmt19-de-en) ## Intended uses & limitations #### How to use ```python from transformers.tokenization_fsmt import FSMTTokenizer from transformers.modeling_fsmt import FSMTForConditionalGeneration mname = "facebook/wmt19-en-ru" tokenizer = FSMTTokenizer.from_pretrained(mname) model = FSMTForConditionalGeneration.from_pretrained(mname) input = "Machine learning is great, isn't it?" input_ids = tokenizer.encode(input, return_tensors="pt") outputs = model.generate(input_ids) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) print(decoded) # Машинное обучение - это здорово, не так ли? ``` #### Limitations and bias - The original (and this ported model) doesn't seem to handle well inputs with repeated sub-phrases, [content gets truncated](https://discuss.huggingface.co/t/issues-with-translating-inputs-containing-repeated-phrases/981) ## Training data Pretrained weights were left identical to the original model released by fairseq. For more details, please, see the [paper](https://arxiv.org/abs/1907.06616). ## Eval results pair | fairseq | transformers -------|---------|---------- en-ru | [36.4](http://matrix.statmt.org/matrix/output/1914?run_id=6724) | 33.47 The score is slightly below the score reported by `fairseq`, since `transformers`` currently doesn't support: - model ensemble, therefore the best performing checkpoint was ported (``model4.pt``). - re-ranking The score was calculated using this code: ```bash git clone https://github.com/huggingface/transformers cd transformers export PAIR=en-ru export DATA_DIR=data/$PAIR export SAVE_DIR=data/$PAIR export BS=8 export NUM_BEAMS=15 mkdir -p $DATA_DIR sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target echo $PAIR PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py facebook/wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS ``` note: fairseq reports using a beam of 50, so you should get a slightly higher score if re-run with `--num_beams 50`. ## Data Sources - [training, etc.](http://www.statmt.org/wmt19/) - [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561) ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020}, title={Facebook FAIR's WMT19 News Translation Task Submission}, author={Ng, Nathan and Yee, Kyra and Baevski, Alexei and Ott, Myle and Auli, Michael and Edunov, Sergey}, booktitle={Proc. of WMT}, } ``` ## TODO - port model ensemble (fairseq uses 4 model checkpoints)
monsoon-nlp/hindi-tpu-electra
monsoon-nlp
2020-08-26T22:19:45Z
39
1
transformers
[ "transformers", "pytorch", "tf", "electra", "feature-extraction", "hi", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: hi --- # Hindi language model ## Trained with ELECTRA base size settings <a href="https://colab.research.google.com/drive/1R8TciRSM7BONJRBc9CBZbzOmz39FTLl_">Tokenization and training CoLab</a> ## Example Notebooks This model outperforms Multilingual BERT on <a href="https://colab.research.google.com/drive/1UYn5Th8u7xISnPUBf72at1IZIm3LEDWN">Hindi movie reviews / sentiment analysis</a> (using SimpleTransformers) You can get higher accuracy using ktrain + TensorFlow, where you can adjust learning rate and other hyperparameters: https://colab.research.google.com/drive/1mSeeSfVSOT7e-dVhPlmSsQRvpn6xC05w?usp=sharing Question-answering on MLQA dataset: https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar#scrollTo=IcFoAHgKCUiQ A smaller model (<a href="https://huggingface.co/monsoon-nlp/hindi-bert">Hindi-BERT</a>) performs better on a BBC news classification task. ## Corpus The corpus is two files: - Hindi CommonCrawl deduped by OSCAR https://traces1.inria.fr/oscar/ - latest Hindi Wikipedia ( https://dumps.wikimedia.org/hiwiki/ ) + WikiExtractor to txt Bonus notes: - Adding English wiki text or parallel corpus could help with cross-lingual tasks and training ## Vocabulary https://drive.google.com/file/d/1-6tXrii3tVxjkbrpSJE9MOG_HhbvP66V/view?usp=sharing Bonus notes: - Created with HuggingFace Tokenizers; you can increase vocabulary size and re-train; remember to change ELECTRA vocab_size ## Training Structure your files, with data-dir named "trainer" here ``` trainer - vocab.txt - pretrain_tfrecords -- (all .tfrecord... files) - models -- modelname --- checkpoint --- graph.pbtxt --- model.* ``` ## Conversion Use this process to convert an in-progress or completed ELECTRA checkpoint to a Transformers-ready model: ``` git clone https://github.com/huggingface/transformers python ./transformers/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=./models/checkpointdir --config_file=config.json --pytorch_dump_path=pytorch_model.bin --discriminator_or_generator=discriminator python ``` ``` from transformers import TFElectraForPreTraining model = TFElectraForPreTraining.from_pretrained("./dir_with_pytorch", from_pt=True) model.save_pretrained("tf") ``` Once you have formed one directory with config.json, pytorch_model.bin, tf_model.h5, special_tokens_map.json, tokenizer_config.json, and vocab.txt on the same level, run: ``` transformers-cli upload directory ```
ishan/distilbert-base-uncased-mnli
ishan
2020-08-21T10:23:40Z
10
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "dataset:MNLI", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: en
thumbnail:
tags:
- pytorch
- text-classification
datasets:
- MNLI
---

# distilbert-base-uncased finetuned on MNLI

## Model Details and Training Data

We used the pretrained model from [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) and finetuned it on the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.

The training parameters were kept the same as in [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).

## Evaluation Results

The evaluation results are mentioned in the table below.

| Test Corpus | Accuracy |
|:---:|:---------:|
| Matched | 0.8223 |
| Mismatched | 0.8216 |
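A minimal inference sketch; the label order is not documented here, so the mapping is read from `model.config.id2label` rather than assumed. The premise/hypothesis pair is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ishan/distilbert-base-uncased-mnli")
model = AutoModelForSequenceClassification.from_pretrained("ishan/distilbert-base-uncased-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]

# Map scores to the labels stored in the model config
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```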
textattack/facebook-bart-base-RTE
textattack
2020-08-20T15:50:48Z
5
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
## TextAttack Model Card

Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.7256317689530686, as measured by the eval set accuracy, found after 4 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/facebook-bart-base-glue-RTE
textattack
2020-08-20T15:49:05Z
5
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
## TextAttack Model Card

This model was fine-tuned with a learning rate of 2e-05 and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.7256317689530686, as measured by the eval set accuracy, found after 4 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
sampathkethineedi/industry-classification
sampathkethineedi
2020-07-16T15:27:38Z
1,545
22
transformers
[ "transformers", "pytorch", "tf", "distilbert", "text-classification", "tensorflow", "industry", "buisiness", "description", "multi-class", "classification", "en", "autotrain_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: "en"
thumbnail: "https://huggingface.co/sampathkethineedi"
tags:
- distilbert
- pytorch
- tensorflow
- text-classification
- industry
- buisiness
- description
- multi-class
- classification
license: "mit"
inference: false
---

# industry-classification

## Model description

DistilBERT model to classify a business description into one of **62 industry tags**.

Trained on 7,000 samples of business descriptions and associated labels of companies in India.

## How to use

PyTorch and TF models available.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("sampathkethineedi/industry-classification")
model = AutoModelForSequenceClassification.from_pretrained("sampathkethineedi/industry-classification")

industry_tags = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)

industry_tags("Stellar Capital Services Limited is an India-based non-banking financial company ... loan against property, management consultancy, personal loans and unsecured loans.")

'''Output'''
[{'label': 'Consumer Finance', 'score': 0.9841355681419373}]
```

## Limitations and bias

Training data is only for Indian companies.
textattack/distilbert-base-uncased-ag-news
textattack
2020-07-07T22:01:14Z
462
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card

This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9478947368421052, as measured by the eval set accuracy, found after 1 epoch.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
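A minimal usage sketch; note that the checkpoint may expose only generic `LABEL_0`–`LABEL_3` ids rather than the ag_news class names, so inspecting `model.config.id2label` to confirm the mapping is left to the user. The input headline is illustrative.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="textattack/distilbert-base-uncased-ag-news")
print(classifier("Wall Street rallied as technology shares rebounded after a volatile week."))
```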
textattack/albert-base-v2-yelp-polarity
textattack
2020-07-06T16:37:10Z
195
3
transformers
[ "transformers", "pytorch", "albert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 512. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.975078947368421, as measured by the eval set accuracy, found after 3 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/albert-base-v2-snli
textattack
2020-07-06T16:36:47Z
10
1
transformers
[ "transformers", "pytorch", "albert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the snli dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 64, a learning rate of 2e-05, and a maximum sequence length of 64. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9060150375939849, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/xlnet-base-cased-rotten-tomatoes
textattack
2020-07-06T16:36:38Z
10
0
transformers
[ "transformers", "pytorch", "xlnet", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
## TextAttack Model Card This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9071294559099438, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/xlnet-base-cased-imdb
textattack
2020-07-06T16:35:25Z
9
0
transformers
[ "transformers", "pytorch", "xlnet", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
## TextAttack Model Card This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack and the imdb dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 512. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.95352, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/albert-base-v2-imdb
textattack
2020-07-06T16:34:24Z
1,003
1
transformers
[ "transformers", "pytorch", "albert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the imdb dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.89236, as measured by the eval set accuracy, found after 3 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/albert-base-v2-WNLI
textattack
2020-07-06T16:33:17Z
3
0
transformers
[ "transformers", "pytorch", "albert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 64, a learning rate of 2e-05, and a maximum sequence length of 256. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.5915492957746479, as measured by the eval set accuracy, found after 0 epoch. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/xlnet-base-cased-RTE
textattack
2020-07-06T16:32:05Z
5
0
transformers
[ "transformers", "pytorch", "xlnet", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
## TextAttack Model Card This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.7111913357400722, as measured by the eval set accuracy, found after 3 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/distilbert-base-uncased-MRPC
textattack
2020-07-06T16:30:12Z
31
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `distilbert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 256. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.8578431372549019, as measured by the eval set accuracy, found after 1 epoch. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/xlnet-base-cased-CoLA
textattack
2020-07-06T16:29:34Z
13
0
transformers
[ "transformers", "pytorch", "xlnet", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
## TextAttack Model Card

This model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.7976989453499521, as measured by the eval set accuracy, found after 2 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/distilbert-base-uncased-CoLA
textattack
2020-07-06T16:29:03Z
3,039
3
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card

This model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 64, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.8235858101629914, as measured by the eval set accuracy, found after 2 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
rodgersearl147/blockassist
rodgersearl147
2025-09-23T12:32:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid vicious ram", "arxiv:2504.07091", "region:us" ]
null
2025-09-21T03:41:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid vicious ram --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ryno01/Qwen3-0.6B-Gensyn-Swarm-moist_quick_heron
ryno01
2025-09-23T12:32:14Z
100
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am moist_quick_heron", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-17T21:25:29Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am moist_quick_heron --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gbatubara/Qwen3-0.6B-Gensyn-Swarm-masked_vigilant_boar
gbatubara
2025-09-23T12:32:09Z
187
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am masked_vigilant_boar", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-14T16:26:05Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am masked_vigilant_boar --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hfsf3283/blockassist
hfsf3283
2025-09-23T12:32:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sharp bold mole", "arxiv:2504.07091", "region:us" ]
null
2025-09-20T11:56:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sharp bold mole --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Pastu9999/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thriving_miniature_crane
Pastu9999
2025-09-23T12:31:42Z
203
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am thriving_miniature_crane", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T14:07:02Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am thriving_miniature_crane --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
beyoru/Lunaa
beyoru
2025-09-23T12:31:38Z
6
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "roleplay", "chat", "rp", "character", "waifu", "natural converation", "creative writing", "storytelling", "sfw", "reasoning", "conversational", "en", "zh", "vi", "license:creativeml-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-21T12:18:43Z
---
library_name: transformers
tags:
- roleplay
- chat
- rp
- character
- waifu
- natural conversation
- creative writing
- storytelling
- sfw
- reasoning
license: creativeml-openrail-m
language:
- en
- zh
- vi
---

# 🌙 Lunaa – Roleplay Chat Model

Lunaa is a conversational AI model designed for **immersive roleplay (RP)** and natural chat. It is fine-tuned to respond in a more engaging, character-driven style than standard instruction-tuned models.

<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65905af887944e494e37e09a/XB1XEInyfE3dyUNAGb5zF.webp" width="300">
</p>

## Notes:

- Optimized for **roleplay-style conversations**
- Flexible: can be used for creative writing, storytelling, or character interactions
- Describe your character in the system prompt.
- Supports both **reasoning** and non-reasoning modes (the latter for faster inference), via Qwen3's `enable_thinking=True/False` (see the usage sketch below).

## Support me at:

<p align="center">
<a href="https://www.buymeacoffee.com/ductransa0g" target="_blank">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" width="150px">
</a>
</p>

## Cite:

```
@misc{Lunaa,
  title = {Lunaa – Roleplay Chat Model},
  author = {Beyoru},
  year = {2025},
  howpublished = {\url{https://huggingface.co/beyoru/Lunaa}}
}
```
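A minimal usage sketch, assuming the standard Qwen3 chat template (which accepts the `enable_thinking` flag); the character system prompt and generation length are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beyoru/Lunaa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are Luna, a playful companion who always stays in character."},
    {"role": "user", "content": "Luna, how was your day?"},
]

# enable_thinking=False switches off the reasoning trace for faster replies
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```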
keithmansell71/blockassist
keithmansell71
2025-09-23T12:31:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "aquatic shy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-09-21T03:53:50Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic shy elephant
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
abhijithmallya/Whisper-Indian-English-ct2
abhijithmallya
2025-09-23T12:31:22Z
0
0
null
[ "automatic-speech-recognition", "dataset:WillHeld/india_accent_cv", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-09-23T12:29:39Z
---
license: apache-2.0
datasets:
- WillHeld/india_accent_cv
base_model:
- openai/whisper-large-v3
pipeline_tag: automatic-speech-recognition
---
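## Usage (sketch)

The `-ct2` suffix suggests a CTranslate2 export of the fine-tuned Whisper model. A minimal transcription sketch with `faster-whisper`, assuming the export in this repository is compatible with it; the device settings and audio path are illustrative placeholders.

```python
from faster_whisper import WhisperModel

# Assumption: this repo holds a CTranslate2 conversion of the fine-tuned
# openai/whisper-large-v3 model, loadable directly by faster-whisper.
model = WhisperModel(
    "abhijithmallya/Whisper-Indian-English-ct2",
    device="cuda",           # or "cpu"
    compute_type="float16",  # use "int8" on CPU
)

# "sample.wav" is a placeholder path to an Indian-English audio recording.
segments, info = model.transcribe("sample.wav", language="en", beam_size=5)
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:6.2f}s -> {segment.end:6.2f}s] {segment.text}")
```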
gbn186564/blockassist
gbn186564
2025-09-23T12:31:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "soft flightless heron", "arxiv:2504.07091", "region:us" ]
null
2025-09-20T12:06:37Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft flightless heron
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
alsandeer33/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_arctic_kangaroo
alsandeer33
2025-09-23T12:31:09Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am flightless arctic kangaroo", "trl", "genrl-swarm", "I am flightless_arctic_kangaroo", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T13:54:45Z
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_arctic_kangaroo
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am flightless arctic kangaroo
- trl
- genrl-swarm
- I am flightless_arctic_kangaroo
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_arctic_kangaroo

This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alsandeer33/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_arctic_kangaroo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
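## GRPO training sketch (illustrative)

For readers unfamiliar with GRPO, a toy reproduction of the training setup with TRL's `GRPOTrainer` might look like the sketch below; the prompt dataset, reward function, and hyperparameters are illustrative placeholders, not the Gensyn swarm's actual configuration.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt dataset (illustrative; the swarm uses its own task prompts).
train_dataset = Dataset.from_dict({
    "prompt": [
        "Explain why the sky is blue in one sentence.",
        "Write a haiku about distributed training.",
    ] * 8
})

# Toy reward: prefer shorter completions (illustrative, not the swarm's reward).
def reward_brevity(completions, **kwargs):
    return [1.0 - min(len(c), 400) / 400 for c in completions]

training_args = GRPOConfig(
    output_dir="qwen2.5-0.5b-grpo-demo",
    num_generations=4,          # completions sampled per prompt for the group baseline
    max_completion_length=64,
    per_device_train_batch_size=4,
    num_train_epochs=1,
    logging_steps=1,
)

trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_brevity,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```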