| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-24 06:28:30 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 573 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-24 06:27:16 |
| card | string | length 11 – 1.01M |
lg/ghpy_40k
lg
2021-05-20T23:37:47Z
3
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# This model is probably not what you're looking for.
lg/openinstruct_1k1
lg
2021-05-20T23:37:33Z
6
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# This model is probably not what you're looking for.
lg/fexp_1
lg
2021-05-20T23:37:11Z
5
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# This model is probably not what you're looking for.
verissimomanoel/RobertaTwitterBR
verissimomanoel
2021-05-20T22:53:32Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
### Twitter RoBERTa BR This is a RoBERTa model for Portuguese, trained on ~7M tweets. The results will be posted in the future. ### Example of usage ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("verissimomanoel/RobertaTwitterBR") model = AutoModel.from_pretrained("verissimomanoel/RobertaTwitterBR") ```
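The card above only shows how to load the checkpoint. Below is a minimal, hedged sketch of querying it through the standard `fill-mask` pipeline (the repository is tagged `fill-mask`); the Portuguese example sentence and the assumption that the RoBERTa `<mask>` token applies are illustrative, not from the card.

```python
from transformers import pipeline

# Minimal sketch, not from the model card: assumes the checkpoint exposes a
# masked-language-modeling head and uses the RoBERTa-style <mask> token.
fill_mask = pipeline("fill-mask", model="verissimomanoel/RobertaTwitterBR")
print(fill_mask("Bom dia, <mask>!"))
```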
urduhack/roberta-urdu-small
urduhack
2021-05-20T22:52:23Z
884
8
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "roberta-urdu-small", "urdu", "ur", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ur thumbnail: https://raw.githubusercontent.com/urduhack/urduhack/master/docs/_static/urduhack.png tags: - roberta-urdu-small - urdu - transformers license: mit --- ## roberta-urdu-small [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/urduhack/urduhack/blob/master/LICENSE) ### Overview **Language model:** roberta-urdu-small **Model size:** 125M **Language:** Urdu **Training data:** News data from Urdu news resources in Pakistan ### About roberta-urdu-small roberta-urdu-small is a language model for the Urdu language. ``` from transformers import pipeline fill_mask = pipeline("fill-mask", model="urduhack/roberta-urdu-small", tokenizer="urduhack/roberta-urdu-small") ``` ## Training procedure roberta-urdu-small was trained on an Urdu news corpus. The training data was normalized using the normalization module from urduhack to eliminate characters from other languages such as Arabic. ### About Urduhack Urduhack is a Natural Language Processing (NLP) library for the Urdu language. GitHub: https://github.com/urduhack/urduhack
tlemberger/sd-ner
tlemberger
2021-05-20T22:31:05Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "token-classification", "token classification", "dataset:EMBO/sd-panels", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - english thumbnail: tags: - token classification license: datasets: - EMBO/sd-panels metrics: - --- # sd-ner ## Model description This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang) and fine-tuned for token classification on the SourceData [sd-panels](https://huggingface.co/datasets/EMBO/sd-panels) dataset to perform Named Entity Recognition of bioentities. ## Intended uses & limitations #### How to use The intended use of this model is Named Entity Recognition of biological entities used in SourceData annotations (https://sourcedata.embo.org), including small molecules, gene products (genes and proteins), subcellular components, cell lines and cell types, organs and tissues, and species, as well as experimental methods. To have a quick check of the model: ```python from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification example = """<s> F. Western blot of input and eluates of Upf1 domains purification in a Nmd4-HA strain. The band with the # might corresponds to a dimer of Upf1-CH, bands marked with a star correspond to residual signal with the anti-HA antibodies (Nmd4). Fragments in the eluate have a smaller size because the protein A part of the tag was removed by digestion with the TEV protease. G6PDH served as a loading control in the input samples </s>""" tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512) model = RobertaForTokenClassification.from_pretrained('EMBO/sd-ner') ner = pipeline('ner', model, tokenizer=tokenizer) res = ner(example) for r in res: print(r['word'], r['entity']) ``` #### Limitations and bias The model must be used with the `roberta-base` tokenizer. ## Training data The model was trained for token classification using the [EMBO/sd-panels dataset](https://huggingface.co/datasets/EMBO/biolang), which includes manually annotated examples. ## Training procedure The training was run on an NVIDIA DGX Station with 4X Tesla V100 GPUs. Training code is available at https://github.com/source-data/soda-roberta - Command: `python -m tokcl.train /data/json/sd_panels NER --num_train_epochs=3.5` - Tokenizer vocab size: 50265 - Training data: EMBO/biolang MLM - Training with 31410 examples. - Evaluating on 8861 examples. - Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY - Epochs: 3.5 - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `learning_rate`: 0.0001 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 ## Eval results On the test set with `sklearn.metrics`: ``` precision recall f1-score support CELL 0.77 0.81 0.79 3477 EXP_ASSAY 0.71 0.70 0.71 7049 GENEPROD 0.86 0.90 0.88 16140 ORGANISM 0.80 0.82 0.81 2759 SMALL_MOLECULE 0.78 0.82 0.80 4446 SUBCELLULAR 0.71 0.75 0.73 2125 TISSUE 0.70 0.75 0.73 1971 micro avg 0.79 0.82 0.81 37967 macro avg 0.76 0.79 0.78 37967 weighted avg 0.79 0.82 0.81 37967 ```
textattack/roberta-base-rotten-tomatoes
textattack
2021-05-20T22:17:29Z
34
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned for 10 epochs with a batch size of 64, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9033771106941839, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
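The TextAttack cards in this dump describe the fine-tuning setup but give no inference snippet. Below is a minimal sketch for this checkpoint; the same pattern applies to the other `textattack/roberta-base-*` classifiers that follow. The example sentence is illustrative, and the returned label names come from the checkpoint's config, which the card does not document.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint with the standard
# text-classification pipeline. The example review is illustrative; the
# label names depend on the checkpoint's config.
classifier = pipeline(
    "text-classification",
    model="textattack/roberta-base-rotten-tomatoes",
)
print(classifier("A gripping, beautifully shot film."))
```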
textattack/roberta-base-imdb
textattack
2021-05-20T22:16:19Z
207
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the imdb dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 64, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.91436, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/roberta-base-ag-news
textattack
2021-05-20T22:15:20Z
487
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 5e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9469736842105263, as measured by the eval set accuracy, found after 4 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/roberta-base-WNLI
textattack
2021-05-20T22:13:50Z
42
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 5e-05, and a maximum sequence length of 256. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.5633802816901409, as measured by the eval set accuracy, found after 0 epoch. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/roberta-base-CoLA
textattack
2021-05-20T22:05:35Z
48,829
17
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.850431447746884, as measured by the eval set accuracy, found after 1 epoch. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
simonlevine/clinical-longformer
simonlevine
2021-05-20T21:25:09Z
19
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
- You'll need to instantiate a special RoBERTa class. Though technically a "Longformer", the elongated RoBERTa model will still need to be pulled in as such. - To do so, use the following classes: ```python class RobertaLongSelfAttention(LongformerSelfAttention): def forward( self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, output_attentions=False, ): return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions) class RobertaLongForMaskedLM(RobertaForMaskedLM): def __init__(self, config): super().__init__(config) for i, layer in enumerate(self.roberta.encoder.layer): # replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention` layer.attention.self = RobertaLongSelfAttention(config, layer_id=i) ``` - Then, pull the model as ```RobertaLongForMaskedLM.from_pretrained('simonlevine/bioclinical-roberta-long')``` - Now, it can be used as usual. Note you may get untrained weights warnings. - Note that you can replace ```RobertaForMaskedLM``` with a different task-specific RoBERTa from Huggingface, such as RobertaForSequenceClassification.
sramasamy8/testModel
sramasamy8
2021-05-20T20:58:24Z
4
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en tags: - exbert license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'}, {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'}, {'sequence': "[CLS] hello i'm a fine model. 
[SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = TFBertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.09747550636529922, 'token': 10533, 'token_str': 'carpenter'}, {'sequence': '[CLS] the man worked as a waiter. [SEP]', 'score': 0.0523831807076931, 'token': 15610, 'token_str': 'waiter'}, {'sequence': '[CLS] the man worked as a barber. [SEP]', 'score': 0.04962705448269844, 'token': 13362, 'token_str': 'barber'}, {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.03788609802722931, 'token': 15893, 'token_str': 'mechanic'}, {'sequence': '[CLS] the man worked as a salesman. [SEP]', 'score': 0.037680890411138535, 'token': 18968, 'token_str': 'salesman'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.21981462836265564, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.1597415804862976, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.1154729500412941, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the woman worked as a prostitute. [SEP]', 'score': 0.037968918681144714, 'token': 19215, 'token_str': 'prostitute'}, {'sequence': '[CLS] the woman worked as a cook. [SEP]', 'score': 0.03042375110089779, 'token': 5660, 'token_str': 'cook'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. 
- In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average | |:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:| | | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=bert-base-uncased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
seyonec/ChemBERTa-zinc-base-v1
seyonec
2021-05-20T20:55:33Z
96,218
46
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "chemistry", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - chemistry --- # ChemBERTa: Training a BERT-like transformer model for masked language modelling of chemical SMILES strings. Deep learning for chemistry and materials science remains a novel field with lots of potential. However, the transfer learning based methods that are popular in areas such as NLP and computer vision have not yet been effectively applied to computational chemistry + machine learning. Using HuggingFace's suite of models and the ByteLevel tokenizer, we are able to train on a large corpus of 100k SMILES strings from a commonly known benchmark dataset, ZINC. Training RoBERTa over 5 epochs, the model achieves a decent loss of 0.398, which would likely continue to decline if trained for a larger number of epochs. The model can predict tokens within a SMILES sequence/molecule, allowing for variants of a molecule within discoverable chemical space to be predicted. By applying the representations of functional groups and atoms learned by the model, we can try to tackle problems of toxicity, solubility, drug-likeness, and synthesis accessibility on smaller datasets using the learned representations as features for graph convolution and attention models on the graph structure of molecules, as well as fine-tuning of BERT. Finally, we propose the use of attention visualization as a helpful tool for chemistry practitioners and students to quickly identify important substructures in various chemical properties. Additionally, visualization of the attention mechanism has been shown in previous research to be incredibly valuable for chemical reaction classification. The applications of open-sourcing large-scale transformer models such as RoBERTa with HuggingFace may allow for the acceleration of these individual research directions. A link to a repository which includes the training, uploading and evaluation notebook (with sample predictions on compounds such as Remdesivir) can be found [here](https://github.com/seyonechithrananda/bert-loves-chemistry). All of the notebooks can be copied into a new Colab runtime for easy execution. Thanks for checking this out! - Seyone
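The card describes masked-token prediction over SMILES strings but includes no code. Below is a minimal sketch using the standard `fill-mask` pipeline; the masked SMILES input (an aspirin-like string with one masked token) and the `<mask>` token convention are assumptions for illustration, not examples from the card.

```python
from transformers import pipeline

# Minimal sketch of masked-token prediction on a SMILES string. The masked
# aspirin-like input is an illustrative assumption, not taken from the card.
fill_mask = pipeline(
    "fill-mask",
    model="seyonec/ChemBERTa-zinc-base-v1",
    tokenizer="seyonec/ChemBERTa-zinc-base-v1",
)
print(fill_mask("CC(=O)OC1=CC=CC=C1C(=O)<mask>"))
```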
pulp/CHILDES-ParentBERTo
pulp
2021-05-20T19:46:06Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
A language model trained on a fill-mask task with all the North American parents' data in CHILDES. The parents' data can be found here: https://github.com/xiaomeng-ma/CHILDES
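Since the card gives no usage snippet, here is a minimal sketch assuming the standard `fill-mask` pipeline and the RoBERTa `<mask>` token; the child-directed example sentence is an illustrative assumption.

```python
from transformers import pipeline

# Minimal sketch; the example sentence is illustrative, not from the card.
fill_mask = pipeline("fill-mask", model="pulp/CHILDES-ParentBERTo")
print(fill_mask("Do you want some <mask>?"))
```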
prajjwal1/roberta-base-mnli
prajjwal1
2021-05-20T19:31:02Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Roberta-base trained on MNLI. | Task | Accuracy | |---------|----------| | MNLI | 86.32 | | MNLI-mm | 86.43 | You can also check out: - `prajjwal1/roberta-base-mnli` - `prajjwal1/roberta-large-mnli` - `prajjwal1/albert-base-v2-mnli` - `prajjwal1/albert-base-v1-mnli` - `prajjwal1/albert-large-v2-mnli` [@prajjwal_1](https://twitter.com/prajjwal_1)
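The card reports MNLI accuracy but no inference code. A minimal sentence-pair sketch follows; the premise/hypothesis pair is illustrative, and the label mapping is read from the checkpoint's config rather than documented in the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal sketch for NLI-style sentence-pair classification; the example pair
# is illustrative and the label names come from the checkpoint's config.
model_name = "prajjwal1/roberta-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer(
    "A man is playing a guitar.",   # premise
    "A person is making music.",    # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```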
phiyodr/roberta-large-finetuned-squad2
phiyodr
2021-05-20T19:27:52Z
20
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "en", "dataset:squad2", "arxiv:1907.11692", "arxiv:1806.03822", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en tags: - pytorch - question-answering datasets: - squad2 metrics: - exact - f1 widget: - text: "What discipline did Winkelmann create?" context: "Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. The prophet and founding hero of modern archaeology, Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art." --- # roberta-large-finetuned-squad2 ## Model description This model is based on [roberta-large](https://huggingface.co/roberta-large) and was finetuned on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/). The corresponding papers you can found [here (model)](https://arxiv.org/abs/1907.11692) and [here (data)](https://arxiv.org/abs/1806.03822). ## How to use ```python from transformers.pipelines import pipeline model_name = "phiyodr/roberta-large-finetuned-squad2" nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) inputs = { 'question': 'What discipline did Winkelmann create?', 'context': 'Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. "The prophet and founding hero of modern archaeology", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art. ' } nlp(inputs) ``` ## Training procedure ``` { "base_model": "roberta-large", "do_lower_case": True, "learning_rate": 3e-5, "num_train_epochs": 4, "max_seq_length": 384, "doc_stride": 128, "max_query_length": 64, "batch_size": 96 } ``` ## Eval results - Data: [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json) - Script: [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/) (original script from [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md)) ``` { "exact": 84.38473848227069, "f1": 87.89711571225455, "total": 11873, "HasAns_exact": 80.9885290148448, "HasAns_f1": 88.02335608157898, "HasAns_total": 5928, "NoAns_exact": 87.77123633305298, "NoAns_f1": 87.77123633305298, "NoAns_total": 5945 } ```
patrickvonplaten/norwegian-roberta-large
patrickvonplaten
2021-05-20T19:15:37Z
3
0
transformers
[ "transformers", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
## Roberta-Large This repo trains [roberta-large](https://huggingface.co/roberta-large) from scratch on the [Norwegian training subset of Oscar](https://oscar-corpus.com/) containing roughly 4.7 GB of data. A ByteLevelBPETokenizer as shown in [this]( ) blog post was trained on the whole [Norwegian training subset of Oscar](https://oscar-corpus.com/). Training is done on a TPUv3-8 in Flax. The training script as well as the script to create a tokenizer are attached below. ### Run 1 ``` --weight_decay="0.01" --max_seq_length="128" --train_batch_size="1048" --eval_batch_size="1048" --learning_rate="1e-3" --warmup_steps="2000" --pad_to_max_length --num_train_epochs="12" --adam_beta1="0.9" --adam_beta2="0.98" ``` Trained for 12 epochs with each epoch including 8005 steps => Total of 96K steps. 1 epoch + eval takes roughly 2 hours 40 minutes => trained in total for 1 day and 8 hours. Final loss was 3.695. **Acc**: ![Acc](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/flax_experiments/norwegian_large_acc_1.svg) **Loss**: ![Loss](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/flax_experiments/norwegian_large_loss_1.svg) ### Run 2 ``` --weight_decay="0.01" --max_seq_length="128" --train_batch_size="1048" --eval_batch_size="1048" --learning_rate="5e-3" --warmup_steps="2000" --pad_to_max_length --num_train_epochs="7" --adam_beta1="0.9" --adam_beta2="0.98" ``` Trained for 7 epochs with each epoch including 8005 steps => Total of 96K steps. 1 epoch + eval takes roughly 2 hours 40 minutes => trained in total for 18 hours. Final loss was 2.216 and accuracy 0.58. **Acc**: ![Acc](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/flax_experiments/norwegian_large_acc_2.svg) **Loss**: ![Loss](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/flax_experiments/norwegian_large_loss_2.svg)
osanseviero/upload-to-hub
osanseviero
2021-05-20T19:13:12Z
3
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
Example card Second modification
nyu-mll/roberta-base-1B-2
nyu-mll
2021-05-20T19:04:39Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
nyu-mll/roberta-base-1B-1
nyu-mll
2021-05-20T19:03:06Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# RoBERTa Pretrained on Smaller Datasets We pretrain RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). We release 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: We combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ### Hyperparameters and Validation Perplexity The hyperparameters and validation perplexities corresponding to each model are as follows: | Model Name | Training Size | Model Size | Max Steps | Batch Size | Validation Perplexity | |--------------------------|---------------|------------|-----------|------------|-----------------------| | [roberta-base-1B-1][link-roberta-base-1B-1] | 1B | BASE | 100K | 512 | 3.93 | | [roberta-base-1B-2][link-roberta-base-1B-2] | 1B | BASE | 31K | 1024 | 4.25 | | [roberta-base-1B-3][link-roberta-base-1B-3] | 1B | BASE | 31K | 4096 | 3.84 | | [roberta-base-100M-1][link-roberta-base-100M-1] | 100M | BASE | 100K | 512 | 4.99 | | [roberta-base-100M-2][link-roberta-base-100M-2] | 100M | BASE | 31K | 1024 | 4.61 | | [roberta-base-100M-3][link-roberta-base-100M-3] | 100M | BASE | 31K | 512 | 5.02 | | [roberta-base-10M-1][link-roberta-base-10M-1] | 10M | BASE | 10K | 1024 | 11.31 | | [roberta-base-10M-2][link-roberta-base-10M-2] | 10M | BASE | 10K | 512 | 10.78 | | [roberta-base-10M-3][link-roberta-base-10M-3] | 10M | BASE | 31K | 512 | 11.58 | | [roberta-med-small-1M-1][link-roberta-med-small-1M-1] | 1M | MED-SMALL | 100K | 512 | 153.38 | | [roberta-med-small-1M-2][link-roberta-med-small-1M-2] | 1M | MED-SMALL | 10K | 512 | 134.18 | | [roberta-med-small-1M-3][link-roberta-med-small-1M-3] | 1M | MED-SMALL | 31K | 512 | 139.39 | The hyperparameters corresponding to model sizes mentioned above are as follows: | Model Size | L | AH | HS | FFN | P | |------------|----|----|-----|------|------| | BASE | 12 | 12 | 768 | 3072 | 125M | | MED-SMALL | 6 | 8 | 512 | 2048 | 45M | (AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters.) For other hyperparameters, we select: - Peak Learning rate: 5e-4 - Warmup Steps: 6% of max steps - Dropout: 0.1 [link-roberta-med-small-1M-1]: https://huggingface.co/nyu-mll/roberta-med-small-1M-1 [link-roberta-med-small-1M-2]: https://huggingface.co/nyu-mll/roberta-med-small-1M-2 [link-roberta-med-small-1M-3]: https://huggingface.co/nyu-mll/roberta-med-small-1M-3 [link-roberta-base-10M-1]: https://huggingface.co/nyu-mll/roberta-base-10M-1 [link-roberta-base-10M-2]: https://huggingface.co/nyu-mll/roberta-base-10M-2 [link-roberta-base-10M-3]: https://huggingface.co/nyu-mll/roberta-base-10M-3 [link-roberta-base-100M-1]: https://huggingface.co/nyu-mll/roberta-base-100M-1 [link-roberta-base-100M-2]: https://huggingface.co/nyu-mll/roberta-base-100M-2 [link-roberta-base-100M-3]: https://huggingface.co/nyu-mll/roberta-base-100M-3 [link-roberta-base-1B-1]: https://huggingface.co/nyu-mll/roberta-base-1B-1 [link-roberta-base-1B-2]: https://huggingface.co/nyu-mll/roberta-base-1B-2 [link-roberta-base-1B-3]: https://huggingface.co/nyu-mll/roberta-base-1B-3
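The card lists pretraining hyperparameters but no loading example. A minimal sketch follows; since these checkpoints are plain masked-language models, the standard `fill-mask` pipeline should apply to any of the model names in the table above. The example sentence is illustrative.

```python
from transformers import pipeline

# Minimal sketch: any of the nyu-mll checkpoints listed in the table can be
# substituted for the model name. The example sentence is illustrative.
fill_mask = pipeline("fill-mask", model="nyu-mll/roberta-base-1B-1")
print(fill_mask("The capital of France is <mask>."))
```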
nkoh01/MSRoberta
nkoh01
2021-05-20T18:51:20Z
8
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# MSRoBERTa Fine-tuned RoBERTa MLM model for the [`Microsoft Sentence Completion Challenge`](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR_SCCD.pdf). This model is case-sensitive, following the `Roberta-base` model. # Model description (taken from: [here](https://huggingface.co/roberta-base)) RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline, AutoModelForMaskedLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("nkoh01/MSRoberta") model = AutoModelForMaskedLM.from_pretrained("nkoh01/MSRoberta") unmasker = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) unmasker("Hello, it is a <mask> to meet you.") [{'score': 0.9508683085441589, 'sequence': 'hello, it is a pleasure to meet you.', 'token': 10483, 'token_str': ' pleasure'}, {'score': 0.015089659951627254, 'sequence': 'hello, it is a privilege to meet you.', 'token': 9951, 'token_str': ' privilege'}, {'score': 0.013942377641797066, 'sequence': 'hello, it is a joy to meet you.', 'token': 5823, 'token_str': ' joy'}, {'score': 0.006964420434087515, 'sequence': 'hello, it is a delight to meet you.', 'token': 13213, 'token_str': ' delight'}, {'score': 0.0024567877408117056, 'sequence': 'hello, it is a honour to meet you.', 'token': 6671, 'token_str': ' honour'}] ``` ## Installations Make sure you run the `!pip install transformers` command to install the transformers library before running the commands above. ## Bias and limitations Under construction.
neurocode/IsRoBERTa
neurocode
2021-05-20T18:50:32Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "is", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: is datasets: - Icelandic portion of the OSCAR corpus from INRIA - oscar --- # IsRoBERTa a RoBERTa-like masked language model Probably the first icelandic transformer language model! ## Overview **Language:** Icelandic **Downstream-task:** masked-lm **Training data:** OSCAR corpus **Code:** See [here](https://github.com/neurocode-io/icelandic-language-model) **Infrastructure**: 1x Nvidia K80 ## Hyperparameters ``` per_device_train_batch_size = 48 n_epochs = 1 vocab_size = 52.000 max_position_embeddings = 514 num_attention_heads = 12 num_hidden_layers = 6 type_vocab_size = 1 learning_rate=0.00005 ``` ## Usage ### In Transformers ```python from transformers import ( pipeline, AutoTokenizer, AutoModelWithLMHead ) model_name = "neurocode/IsRoBERTa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelWithLMHead.from_pretrained(model_name) >>> fill_mask = pipeline( ... "fill-mask", ... model=model, ... tokenizer=tokenizer ... ) >>> result = fill_mask("Hann fór út að <mask>.") >>> result [ {'sequence': '<s>Hann fór út að nýju.</s>', 'score': 0.03395755589008331, 'token': 2219, 'token_str': 'Ġnýju'}, {'sequence': '<s>Hann fór út að undanförnu.</s>', 'score': 0.029087543487548828, 'token': 7590, 'token_str': 'Ġundanförnu'}, {'sequence': '<s>Hann fór út að lokum.</s>', 'score': 0.024420788511633873, 'token': 4384, 'token_str': 'Ġlokum'}, {'sequence': '<s>Hann fór út að þessu.</s>', 'score': 0.021231256425380707, 'token': 921, 'token_str': 'Ġþessu'}, {'sequence': '<s>Hann fór út að honum.</s>', 'score': 0.0205782949924469, 'token': 1136, 'token_str': 'Ġhonum'} ] ``` ## Authors Bobby Donchev: `contact [at] donchev.is` Elena Cramer: `elena.cramer [at] neurocode.io` ## About us We bring AI software for our customers live Our focus: AI software development Get in touch: [LinkedIn](https://de.linkedin.com/company/neurocodeio) | [Website](https://neurocode.io)
mudes/en-large
mudes
2021-05-20T18:36:06Z
5
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "token-classification", "mudes", "en", "arxiv:2102.09665", "arxiv:2104.04630", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: en tags: - mudes license: apache-2.0 --- # MUDES - {Mu}ltilingual {De}tection of Offensive {S}pans We provide state-of-the-art models to detect toxic spans in social media texts. We introduce our framework in [this paper](https://arxiv.org/abs/2102.09665). We have evaluated our models on Toxic Spans task at SemEval 2021 (Task 5). Our participation in the task is detailed in [this paper](https://arxiv.org/abs/2104.04630). ## Usage You can use this model when you have [MUDES](https://github.com/TharinduDR/MUDES) installed: ```bash pip install mudes ``` Then you can use the model like this: ```python from mudes.app.mudes_app import MUDESApp app = MUDESApp("en-large", use_cuda=False) print(app.predict_toxic_spans("You motherfucking cunt", spans=True)) ``` ## System Demonstration An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out in [here](http://rgcl.wlv.ac.uk/mudes/). ## Citing & Authors If you find this model helpful, feel free to cite our publications ```bibtex @inproceedings{ranasinghemudes, title={{MUDES: Multilingual Detection of Offensive Spans}}, author={Tharindu Ranasinghe and Marcos Zampieri}, booktitle={Proceedings of NAACL}, year={2021} } ``` ```bibtex @inproceedings{ranasinghe2021semeval, title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}}, author = {Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex}, booktitle={Proceedings of SemEval}, year={2021} } ```
mrm8488/roberta-base-1B-1-finetuned-squadv1
mrm8488
2021-05-20T18:26:13Z
35
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "en", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en --- # RoBERTa-base (1B-1) + SQuAD v1 ❓ [roberta-base-1B-1](https://huggingface.co/nyu-mll/roberta-base-1B-1) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task. ## Details of the downstream task (Q&A) - Model 🧠 RoBERTa Pretrained on Smaller Datasets [NYU Machine Learning for Language](https://huggingface.co/nyu-mll) pretrained RoBERTa on smaller datasets (1M, 10M, 100M, 1B tokens). They released 3 models with lowest perplexities for each pretraining data size out of 25 runs (or 10 in the case of 1B tokens). The pretraining data reproduces that of BERT: They combine English Wikipedia and a reproduction of BookCorpus using texts from smashwords in a ratio of approximately 3:1. ## Details of the downstream task (Q&A) - Dataset 📚 **S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles. ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python transformers/examples/question-answering/run_squad.py \ --model_type roberta \ --model_name_or_path 'nyu-mll/roberta-base-1B-1' \ --do_eval \ --do_train \ --do_lower_case \ --train_file /content/dataset/train-v1.1.json \ --predict_file /content/dataset/dev-v1.1.json \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 10 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /content/output \ --overwrite_output_dir \ --save_steps 1000 ``` ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **72.62** | | **F1** | **82.19** | ```json { 'exact': 72.62062440870388, 'f1': 82.19430877136834, 'total': 10570, 'HasAns_exact': 72.62062440870388, 'HasAns_f1': 82.19430877136834, 'HasAns_total': 10570, 'best_exact': 72.62062440870388, 'best_exact_thresh': 0.0, 'best_f1': 82.19430877136834, 'best_f1_thresh': 0.0 } ``` ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline QnA_pipeline = pipeline('question-answering', model='mrm8488/roberta-base-1B-1-finetuned-squadv1') QnA_pipeline({ 'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.', 'question': 'What has been discovered by scientists from China ?' }) # Output: {'answer': 'A new strain of flu', 'end': 19, 'score': 0.04702283976040074, 'start': 0} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/distilroberta-finetuned-tweets-hate-speech
mrm8488
2021-05-20T18:25:15Z
6
6
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "twitter", "hate", "speech", "en", "dataset:tweets_hate_speech_detection", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - twitter - hate - speech datasets: - tweets_hate_speech_detection widget: - text: "the fuck done with #mansplaining and other bullshit." --- # distilroberta-base fine-tuned on the tweets_hate_speech_detection dataset for hate speech detection Validation accuracy: 0.98
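The card reports only a validation accuracy, so a minimal inference sketch follows; it assumes the standard text-classification pipeline, and the example tweet and returned label names (taken from the checkpoint's config) are not documented in the card.

```python
from transformers import pipeline

# Minimal sketch; the example tweet is illustrative and the returned label
# names depend on the checkpoint's config, which the card does not document.
classifier = pipeline(
    "text-classification",
    model="mrm8488/distilroberta-finetuned-tweets-hate-speech",
)
print(classifier("I really enjoyed the concert last night!"))
```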
mrm8488/codebert-base-finetuned-detect-insecure-code
mrm8488
2021-05-20T18:19:02Z
166
28
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "en", "dataset:codexglue", "arxiv:2002.08155", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en datasets: - codexglue --- # CodeBERT fine-tuned for Insecure Code Detection 💾⛔ [codebert-base](https://huggingface.co/microsoft/codebert-base) fine-tuned on the [CodeXGLUE -- Defect Detection](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) dataset for the **Insecure Code Detection** downstream task. ## Details of [CodeBERT](https://arxiv.org/abs/2002.08155) We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing. ## Details of the downstream task (code classification) - Dataset 📚 Given a piece of source code, the task is to identify whether it is insecure code that may attack software systems, such as resource leaks, use-after-free vulnerabilities and DoS attacks. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code. The [dataset](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) used comes from the paper [*Devign*: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks](http://papers.nips.cc/paper/9209-devign-effective-vulnerability-identification-by-learning-comprehensive-program-semantics-via-graph-neural-networks.pdf). All projects are combined and split 80%/10%/10% for training/dev/test.
Data statistics of the dataset are shown in the below table: | | #Examples | | ----- | :-------: | | Train | 21,854 | | Dev | 2,732 | | Test | 2,732 | ## Test set metrics 🧾 | Methods | ACC | | -------- | :-------: | | BiLSTM | 59.37 | | TextCNN | 60.69 | | [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) | 61.05 | | [CodeBERT](https://arxiv.org/pdf/2002.08155.pdf) | 62.08 | | [Ours](https://huggingface.co/mrm8488/codebert-base-finetuned-detect-insecure-code) | **65.30** | ## Model in Action 🚀 ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np tokenizer = AutoTokenizer.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code') model = AutoModelForSequenceClassification.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code') inputs = tokenizer("your code here", return_tensors="pt", truncation=True, padding='max_length') labels = torch.tensor([1]).unsqueeze(0) # Batch size 1 outputs = model(**inputs, labels=labels) loss = outputs.loss logits = outputs.logits print(np.argmax(logits.detach().numpy())) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/codeBERTaJS
mrm8488
2021-05-20T18:17:36Z
10
6
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "javascript", "code", "arxiv:1909.09436", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: code thumbnail: tags: - javascript - code widget: - text: "async function createUser(req, <mask>) { if (!validUser(req.body.user)) { return res.status(400); } user = userService.createUser(req.body.user); return res.json(user); }" --- # CodeBERTaJS CodeBERTaJS is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub for `JavaScript` by [Manuel Romero](https://twitter.com/mrm8488) The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`. Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are between 33% and 50% shorter, compared to the same corpus tokenized by gpt2/roberta). The (small) **model** is a 6-layer, 84M-parameter, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full `javascript` corpus (120M after preprocessing) for 2 epochs. ## Quick start: masked language modeling prediction ```python JS_CODE = """ async function createUser(req, <mask>) { if (!validUser(req.body.user)) { \t return res.status(400); } user = userService.createUser(req.body.user); return res.json(user); } """.lstrip() ``` ### Does the model know how to complete simple JS/Express-like code? ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="mrm8488/codeBERTaJS", tokenizer="mrm8488/codeBERTaJS" ) fill_mask(JS_CODE) ## Top 5 predictions: # 'res' # prob 0.069489665329 'next' 'req' 'user' ',req' ``` ### Yes! That was easy 🎉 Let's try with another example ```python JS_CODE_= """ function getKeys(obj) { keys = []; for (var [key, value] of Object.entries(obj)) { keys.push(<mask>); } return keys } """.lstrip() ``` Results: ```python 'obj', 'key', ' value', 'keys', 'i' ``` > Not so bad! The right token was predicted as the second option! 🎉 ## This work is heavily inspired by [codeBERTa](https://github.com/huggingface/transformers/blob/master/model_cards/huggingface/CodeBERTa-small-v1/README.md) by the huggingface team <br> ## CodeSearchNet citation <details> ```bibtex @article{husain_codesearchnet_2019, \ttitle = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}}, \tshorttitle = {{CodeSearchNet} {Challenge}}, \turl = {http://arxiv.org/abs/1909.09436}, \turldate = {2020-03-12}, \tjournal = {arXiv:1909.09436 [cs, stat]}, \tauthor = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, \tmonth = sep, \tyear = {2019}, \tnote = {arXiv: 1909.09436}, } ``` </details> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/RuPERTa-base-finetuned-squadv2
mrm8488
2021-05-20T18:14:42Z
6
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "es", "dataset:squad_v2", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: es datasets: - squad_v2 ---
mrm8488/RuPERTa-base-finetuned-squadv1
mrm8488
2021-05-20T18:13:28Z
14
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "es", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: es datasets: - squad ---
mrm8488/RuPERTa-base-finetuned-pos
mrm8488
2021-05-20T18:08:34Z
17
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "token-classification", "es", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: es thumbnail: --- # RuPERTa-base (Spanish RoBERTa) + POS 🎃🏷 This model is a version of [RuPERTa-base](https://huggingface.co/mrm8488/RuPERTa-base) fine-tuned on [CONLL CORPORA](https://www.kaggle.com/nltkdata/conll-corpora) for the **POS** downstream task. ## Details of the downstream task (POS) - Dataset - [Dataset: CONLL Corpora ES](https://www.kaggle.com/nltkdata/conll-corpora) 📚 | Dataset | # Examples | | ---------------------- | ----- | | Train | 445 K | | Dev | 55 K | - [Fine-tune on NER script provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py) - Labels covered: ``` ADJ ADP ADV AUX CCONJ DET INTJ NOUN NUM PART PRON PROPN PUNCT SCONJ SYM VERB ``` ## Metrics on evaluation set 🧾 | Metric | # score | | :----: | :-------: | | F1 | **97.39** | | Precision | **97.47** | | Recall | **97.32** | ## Model in action 🔨 Example of usage ```python import torch from transformers import AutoModelForTokenClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('mrm8488/RuPERTa-base-finetuned-pos') model = AutoModelForTokenClassification.from_pretrained('mrm8488/RuPERTa-base-finetuned-pos') id2label = { "0": "O", "1": "ADJ", "2": "ADP", "3": "ADV", "4": "AUX", "5": "CCONJ", "6": "DET", "7": "INTJ", "8": "NOUN", "9": "NUM", "10": "PART", "11": "PRON", "12": "PROPN", "13": "PUNCT", "14": "SCONJ", "15": "SYM", "16": "VERB" } text = "Mis amigos están pensando viajar a Londres este verano." input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0) outputs = model(input_ids) last_hidden_states = outputs[0] for m in last_hidden_states: for index, n in enumerate(m): if(index > 0 and index <= len(text.split(" "))): print(text.split(" ")[index-1] + ": " + id2label[str(torch.argmax(n).item())]) ''' Output: -------- Mis: NUM amigos: PRON están: AUX pensando: ADV viajar: VERB a: ADP Londres: PROPN este: DET verano..: NOUN ''' ``` Yeah! Not too bad 🎉 > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/RuPERTa-base-finetuned-pawsx-es
mrm8488
2021-05-20T18:07:14Z
25
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "nli", "es", "dataset:xtreme", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: es datasets: - xtreme tags: - nli widget: - text: "En 2009 se mudó a Filadelfia y en la actualidad vive en Nueva York. Se mudó nuevamente a Filadelfia en 2009 y ahora vive en la ciudad de Nueva York." --- # RuPERTa-base fine-tuned on PAWS-X-es for Paraphrase Identification (NLI)
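The card stops at the title, so the following is a small illustrative sketch (not from the original card) of scoring a sentence pair. The sentences are taken from the widget example above, and the label names are read from the model config rather than assumed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mrm8488/RuPERTa-base-finetuned-pawsx-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Sentence pair from the widget example above.
s1 = "En 2009 se mudó a Filadelfia y en la actualidad vive en Nueva York."
s2 = "Se mudó nuevamente a Filadelfia en 2009 y ahora vive en la ciudad de Nueva York."

inputs = tokenizer(s1, s2, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 4))
```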
mrm8488/RoBasquERTa
mrm8488
2021-05-20T18:05:08Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "eu", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: eu widget: - text: "Euskara da Euskal Herriko <mask> ofiziala" - text: "Gaur egun, Euskadik Espainia osoko ekonomia <mask> du" --- # RoBasquERTa: RoBERTa-like Language model trained on OSCAR Basque corpus
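Since the card ends at the title, here is a minimal fill-mask sketch that reuses the first widget sentence above; it is an illustration, not part of the original card.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="mrm8488/RoBasquERTa",
    tokenizer="mrm8488/RoBasquERTa",
)

# Widget sentence from the card ("Basque is the official <mask> of the Basque Country").
for pred in fill_mask("Euskara da Euskal Herriko <mask> ofiziala"):
    print(pred["token_str"], round(pred["score"], 4))
```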
mrm8488/CodeBERTaPy
mrm8488
2021-05-20T18:01:23Z
25
3
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "code", "arxiv:1909.09436", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: code thumbnail: --- # CodeBERTaPy CodeBERTaPy is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub for `python` by [Manuel Romero](https://twitter.com/mrm8488) The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`. Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are between 33% to 50% shorter, compared to the same corpus tokenized by gpt2/roberta). The (small) **model** is a 6-layer, 84M parameters, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full `python` corpus for 4 epochs. ## Quick start: masked language modeling prediction ```python PYTHON_CODE = """ fruits = ['apples', 'bananas', 'oranges'] for idx, <mask> in enumerate(fruits): print("index is %d and value is %s" % (idx, val)) """.lstrip() ``` ### Does the model know how to complete simple Python code? ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="mrm8488/CodeBERTaPy", tokenizer="mrm8488/CodeBERTaPy" ) fill_mask(PYTHON_CODE) ## Top 5 predictions: 'val' # prob 0.980728805065155 'value' 'idx' ',val' '_' ``` ### Yes! That was easy 🎉 Let's try with another Flask like example ```python PYTHON_CODE2 = """ @app.route('/<name>') def hello_name(name): return "Hello {}!".format(<mask>) if __name__ == '__main__': app.run() """.lstrip() fill_mask(PYTHON_CODE2) ## Top 5 predictions: 'name' # prob 0.9961813688278198 ' name' 'url' 'description' 'self' ``` ### Yeah! It works 🎉 Let's try with another Tensorflow/Keras like example ```python PYTHON_CODE3=""" model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.<mask>(128, activation='relu'), keras.layers.Dense(10, activation='softmax') ]) """.lstrip() fill_mask(PYTHON_CODE3) ## Top 5 predictions: 'Dense' # prob 0.4482928514480591 'relu' 'Flatten' 'Activation' 'Conv' ``` > Great! 🎉 ## This work is heavily inspired on [CodeBERTa](https://github.com/huggingface/transformers/blob/master/model_cards/huggingface/CodeBERTa-small-v1/README.md) by huggingface team <br> ## CodeSearchNet citation <details> ```bibtex @article{husain_codesearchnet_2019, title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}}, shorttitle = {{CodeSearchNet} {Challenge}}, url = {http://arxiv.org/abs/1909.09436}, urldate = {2020-03-12}, journal = {arXiv:1909.09436 [cs, stat]}, author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, month = sep, year = {2019}, note = {arXiv: 1909.09436}, } ``` </details> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
julien-c/dummy-unknown
julien-c
2021-05-20T17:31:14Z
61,031
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "ci", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - ci --- ## Dummy model used for unit testing and CI ```python import json import os from transformers import RobertaConfig, RobertaForMaskedLM, TFRobertaForMaskedLM DIRNAME = "./dummy-unknown" config = RobertaConfig(10, 20, 1, 1, 40) model = RobertaForMaskedLM(config) model.save_pretrained(DIRNAME) tf_model = TFRobertaForMaskedLM.from_pretrained(DIRNAME, from_pt=True) tf_model.save_pretrained(DIRNAME) # Tokenizer: vocab = [ "l", "o", "w", "e", "r", "s", "t", "i", "d", "n", "\u0120", "\u0120l", "\u0120n", "\u0120lo", "\u0120low", "er", "\u0120lowest", "\u0120newer", "\u0120wider", "<unk>", ] vocab_tokens = dict(zip(vocab, range(len(vocab)))) merges = ["#version: 0.2", "\u0120 l", "\u0120l o", "\u0120lo w", "e r", ""] vocab_file = os.path.join(DIRNAME, "vocab.json") merges_file = os.path.join(DIRNAME, "merges.txt") with open(vocab_file, "w", encoding="utf-8") as fp: fp.write(json.dumps(vocab_tokens) + "\n") with open(merges_file, "w", encoding="utf-8") as fp: fp.write("\n".join(merges)) ```
jpcorb20/toxic-detector-distilroberta
jpcorb20
2021-05-20T17:25:58Z
88
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# DistilRoBERTa for toxic comment detection

See my GitHub repo [toxic-comment-server](https://github.com/jpcorb20/toxic-comment-server).

The model was trained from [DistilRoberta](https://huggingface.co/distilroberta-base) on [Kaggle Toxic Comments](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) with the BCEWithLogits loss for multi-label prediction. Apply a sigmoid to the logits rather than a softmax; the HF widget, which uses a softmax, does not reflect the intended multi-label usage.

## Evaluation

F1 scores:

- toxic: 0.72
- severe_toxic: 0.38
- obscene: 0.72
- threat: 0.52
- insult: 0.69
- identity_hate: 0.60

Macro-F1: 0.61
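Because the card asks for sigmoid scores on the logits, a short sketch may help; it is illustrative only, the input sentence is mine, and the label names are taken from the model config rather than hard-coded.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jpcorb20/toxic-detector-distilroberta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Example comment written for illustration only.
inputs = tokenizer("You are a wonderful person.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits[0]

# Multi-label setup: apply a per-class sigmoid instead of a softmax over classes.
scores = torch.sigmoid(logits)
for idx, score in enumerate(scores):
    print(model.config.id2label[idx], round(score.item(), 4))
```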
softcatala/julibert
softcatala
2021-05-20T17:19:38Z
8
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "ca", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ca --- ## Introduction Download the model here: * Catalan Roberta model: [julibert-2020-11-10.zip](https://www.softcatala.org/pub/softcatala/julibert/julibert-2020-11-10.zip) ## What's this? Source code: https://github.com/Softcatala/julibert * Corpus: Oscar Catalan Corpus (3,8G) * Model type: Roberta * Vocabulary size: 50265 * Steps: 500000
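No usage snippet is included in the card, so here is a minimal fill-mask sketch; the Catalan example sentence is my own illustration, and loading through the `fill-mask` pipeline assumes the hosted checkpoint ships its tokenizer.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="softcatala/julibert",
    tokenizer="softcatala/julibert",
)

# "The capital of Catalonia is <mask>."
for pred in fill_mask("La capital de Catalunya és <mask>."):
    print(pred["token_str"], round(pred["score"], 4))
```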
jason9693/SoongsilBERT-nsmc-base
jason9693
2021-05-20T17:08:31Z
6
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Finetuning ## Result ### Base Model | | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) | | :-------------------- | :---: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: | | KoBERT | 351M | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 | | XLM-Roberta-Base | 1.03G | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 | | HanBERT | 614M | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 | | KoELECTRA-Base-v3 | 431M | 90.63 | 88.11 | 84.45 | 82.24 | 85.53 | 95.25 | 84.83 / 93.45 | 67.61 | | Soongsil-BERT | 370M | **91.2** | - | - | - | 76 | 94 | - | **69** | ### Small Model | | Size | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) | | :--------------------- | :--: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: | | DistilKoBERT | 108M | 88.60 | 84.65 | 60.50 | 72.00 | 72.59 | 92.48 | 54.40 / 77.97 | 60.72 | | KoELECTRA-Small-v3 | 54M | 89.36 | 85.40 | 77.45 | 78.60 | 80.79 | 94.85 | 82.11 / 91.13 | 63.07 | | Soongsil-BERT | 213M | **90.7** | 84 | 69.1 | 76 | - | 92 | - | **66** | ## Reference - [Transformers Examples](https://github.com/huggingface/transformers/blob/master/examples/README.md) - [NSMC](https://github.com/e9t/nsmc) - [Naver NER Dataset](https://github.com/naver/nlp-challenge) - [PAWS](https://github.com/google-research-datasets/paws) - [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) - [Question Pair](https://github.com/songys/Question_pair) - [KorQuad](https://korquad.github.io/category/1.0_KOR.html) - [Korean Hate Speech](https://github.com/kocohub/korean-hate-speech) - [KoELECTRA](https://github.com/monologg/KoELECTRA) - [KoBERT](https://github.com/SKTBrain/KoBERT) - [HanBERT](https://github.com/tbai2019/HanBert-54k-N) - [HanBert Transformers](https://github.com/monologg/HanBert-Transformers)
idjotherwise/autonlp-reading_prediction-172506
idjotherwise
2021-05-20T16:57:07Z
6
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autonlp", "en", "dataset:idjotherwise/autonlp-data-reading_prediction", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - idjotherwise/autonlp-data-reading_prediction --- # Model Trained Using AutoNLP - Problem type: Single Column Regression - Model ID: 172506 ## Validation Metrics - Loss: 0.03257797285914421 - MSE: 0.03257797285914421 - MAE: 0.14246532320976257 - R2: 0.9693824457290849 - RMSE: 0.18049369752407074 - Explained Variance: 0.9699198007583618 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/idjotherwise/autonlp-reading_prediction-172506 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("idjotherwise/autonlp-reading_prediction-172506") tokenizer = AutoTokenizer.from_pretrained("idjotherwise/autonlp-reading_prediction-172506") inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
iarfmoose/roberta-small-bulgarian
iarfmoose
2021-05-20T16:54:01Z
6
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "bg", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: bg
---

# RoBERTa-small-bulgarian

The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This is a smaller version of [RoBERTa-base-bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian) with only 6 hidden layers, but similar performance.

## Intended uses

This model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian.

## Limitations and bias

The training data is unfiltered text from the internet and may contain all sorts of biases.

## Training data

This model was trained on the following data:
- [bg_dedup from OSCAR](https://oscar-corpus.com/)
- [Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)
- [Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian)

## Training procedure

The model was pretrained using a masked language-modeling objective with dynamic masking as described [here](https://huggingface.co/roberta-base#preprocessing).

It was trained for 160k steps. The batch size was limited to 8 due to GPU memory limitations.
iarfmoose/roberta-base-bulgarian
iarfmoose
2021-05-20T16:50:24Z
29
1
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "bg", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: bg --- # RoBERTa-base-bulgarian The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This is a version of [RoBERTa-base](https://huggingface.co/roberta-base) pretrained on Bulgarian text. ## Intended uses This model can be used for cloze tasks (masked language modeling) or finetuned on other tasks in Bulgarian. ## Limitations and bias The training data is unfiltered text from the internet and may contain all sorts of biases. ## Training data This model was trained on the following data: - [bg_dedup from OSCAR](https://oscar-corpus.com/) - [Newscrawl 1 million sentences 2017 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian) - [Wikipedia 1 million sentences 2016 from Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/bulgarian) ## Training procedure The model was pretrained using a masked language-modeling objective with dynamic masking as described [here](https://huggingface.co/roberta-base#preprocessing) It was trained for 200k steps. The batch size was limited to 8 due to GPU memory limitations.
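The card describes cloze (masked language modeling) use but shows no code, so a brief sketch follows; the Bulgarian example sentence is my own illustration, not from the card.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="iarfmoose/roberta-base-bulgarian",
    tokenizer="iarfmoose/roberta-base-bulgarian",
)

# "Sofia is the capital of <mask>."
for pred in fill_mask("София е столицата на <mask>."):
    print(pred["token_str"], round(pred["score"], 4))
```

The same pattern should apply to the 6-layer `iarfmoose/roberta-small-bulgarian` model listed above.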
iarfmoose/roberta-base-bulgarian-pos
iarfmoose
2021-05-20T16:49:07Z
14
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "token-classification", "bg", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: bg --- # RoBERTa-base-bulgarian-POS The RoBERTa model was originally introduced in [this paper](https://arxiv.org/abs/1907.11692). This model is a version of [RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian) fine-tuned for part-of-speech tagging. ## Intended uses The model can be used to predict part-of-speech tags in Bulgarian text. Since the tokenizer uses byte-pair encoding, each word in the text may be split into more than one token. When predicting POS-tags, the last token from each word can be used. Using the last token was found to slightly outperform predictions based on the first token. An example of this can be found [here](https://github.com/iarfmoose/bulgarian-nlp/blob/master/models/postagger.py). ## Limitations and bias The pretraining data is unfiltered text from the internet and may contain all sorts of biases. ## Training data In addition to the pretraining data used in [RoBERTa-base-Bulgarian]([RoBERTa-base-Bulgarian](https://huggingface.co/iarfmoose/roberta-base-bulgarian)), the model was trained on the UPOS tags from [UD_Bulgarian-BTB](https://github.com/UniversalDependencies/UD_Bulgarian-BTB). ## Training procedure The model was trained for 5 epochs over the training set. The loss was calculated based on label predictions for the last POS-tag for each word. The model achieves 97% on the test set.
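The card recommends reading the POS tag from the last sub-token of each word. The sketch below is my own illustration, loosely following the linked repository: it assumes a fast tokenizer (for `word_ids()`), uses `add_prefix_space=True` so the byte-level BPE tokenizer accepts pre-split words, and takes the label names from the model config; the example sentence is not from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "iarfmoose/roberta-base-bulgarian-pos"
# add_prefix_space=True lets the byte-level BPE tokenizer handle pre-tokenized input.
tokenizer = AutoTokenizer.from_pretrained(model_id, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(model_id)

words = ["София", "е", "столицата", "на", "България", "."]  # illustrative sentence
encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    predictions = model(**encoding).logits.argmax(dim=-1)[0]

# Keep only the last sub-token of each word, as the card suggests.
last_token_of_word = {}
for token_idx, word_idx in enumerate(encoding.word_ids()):
    if word_idx is not None:
        last_token_of_word[word_idx] = token_idx

for word_idx, token_idx in sorted(last_token_of_word.items()):
    print(words[word_idx], model.config.id2label[predictions[token_idx].item()])
```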
hashk1/EsperBERTo-malgranda
hashk1
2021-05-20T16:38:46Z
4
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "eo", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: eo
thumbnail: https://huggingface.co/blog/assets/01_how-to-train/EsperBERTo-thumbnail-v2.png
widget:
- text: "Ĉu vi parolas la <mask> Esperanto?"
---

## EsperBERTo: RoBERTa-like language model trained on Esperanto
giganticode/StackOBERTflow-comments-small-v1
giganticode
2021-05-20T16:33:56Z
10
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# StackOBERTflow-comments-small StackOBERTflow is a RoBERTa model trained on StackOverflow comments. A Byte-level BPE tokenizer with dropout was used (using the `tokenizers` package). The model is *small*, i.e. has only 6-layers and the maximum sequence length was restricted to 256 tokens. The model was trained for 6 epochs on several GBs of comments from the StackOverflow corpus. ## Quick start: masked language modeling prediction ```python from transformers import pipeline from pprint import pprint COMMENT = "You really should not do it this way, I would use <mask> instead." fill_mask = pipeline( "fill-mask", model="giganticode/StackOBERTflow-comments-small-v1", tokenizer="giganticode/StackOBERTflow-comments-small-v1" ) pprint(fill_mask(COMMENT)) # [{'score': 0.019997311756014824, # 'sequence': '<s> You really should not do it this way, I would use jQuery instead.</s>', # 'token': 1738}, # {'score': 0.01693696901202202, # 'sequence': '<s> You really should not do it this way, I would use arrays instead.</s>', # 'token': 2844}, # {'score': 0.013411642983555794, # 'sequence': '<s> You really should not do it this way, I would use CSS instead.</s>', # 'token': 2254}, # {'score': 0.013224546797573566, # 'sequence': '<s> You really should not do it this way, I would use it instead.</s>', # 'token': 300}, # {'score': 0.011984303593635559, # 'sequence': '<s> You really should not do it this way, I would use classes instead.</s>', # 'token': 1779}] ```
elgeish/cs224n-squad2.0-roberta-base
elgeish
2021-05-20T16:16:38Z
12
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "arxiv:2004.07067", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
## CS224n SQuAD2.0 Project Dataset The goal of this model is to save CS224n students GPU time when establishing baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf). The training set used to fine-tune this model is the same as the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however, evaluation and model selection were performed using roughly half of the official dev set, 6078 examples, picked at random. The data files can be found at <https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020 version. Given that the official SQuAD2.0 dev set contains the project's test set, students must make sure not to use the official SQuAD2.0 dev set in any way — including the use of models fine-tuned on the official SQuAD2.0, since they used the official SQuAD2.0 dev set for model selection. ## Results ```json { "exact": 75.32082922013821, "f1": 78.66699523704254, "total": 6078, "HasAns_exact": 74.84536082474227, "HasAns_f1": 81.83436324767868, "HasAns_total": 2910, "NoAns_exact": 75.75757575757575, "NoAns_f1": 75.75757575757575, "NoAns_total": 3168, "best_exact": 75.32082922013821, "best_exact_thresh": 0.0, "best_f1": 78.66699523704266, "best_f1_thresh": 0.0 } ``` ## Notable Arguments ```json { "do_lower_case": true, "doc_stride": 128, "fp16": false, "fp16_opt_level": "O1", "gradient_accumulation_steps": 24, "learning_rate": 3e-05, "max_answer_length": 30, "max_grad_norm": 1, "max_query_length": 64, "max_seq_length": 384, "model_name_or_path": "roberta-base", "model_type": "roberta", "num_train_epochs": 4, "per_gpu_train_batch_size": 16, "save_steps": 5000, "seed": 42, "train_batch_size": 16, "version_2_with_negative": true, "warmup_steps": 0, "weight_decay": 0 } ``` ## Environment Setup ```json { "transformers": "2.5.1", "pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0", "python": "3.6.5=hc3d631a_2", "os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux", "gpu": "Tesla V100-SXM2-16GB" } ``` ## How to Cite ```BibTeX @misc{elgeish2020gestalt, title={Gestalt: a Stacking Ensemble for SQuAD2.0}, author={Mohamed El-Geish}, journal={arXiv e-prints}, archivePrefix={arXiv}, eprint={2004.07067}, year={2020}, } ``` ## Related Models * [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2) * [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2) * [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1) * [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
deepampatel/roberta-mlm-marathi
deepampatel
2021-05-20T15:58:32Z
13
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "mr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: "mr" --- # Welcome to Roberta-Marathi-MLM ## Model Description > This is a small language model for [Marathi](https://en.wikipedia.org/wiki/Marathi) language with 1M data samples taken from [OSCAR page](https://oscar-public.huma-num.fr/shuffled/mr_dedup.txt.gz) ## Training params - **Dataset** - 1M data samples are used to train this model from OSCAR page(https://oscar-corpus.com/) eventhough data set is of 2.7 GB due to resource constraint to train I have picked only 1M data from the total 2.7GB data set. If you are interested in collaboration and have computational resources to train on you are most welcome to do so. - **Preprocessing** - ByteLevelBPETokenizer is used to tokenize the sentences at character level and vocabulary size is set to 52k as per standard values given by 🤗 <!-- - **Hyperparameters** - __ByteLevelBPETokenizer__ : vocabulary size = 52_000 and min_frequency = 2 __Trainer__ : num_train_epochs=12 - trained for 12 epochs per_gpu_train_batch_size=64 - batch size for the datasamples is 64 save_steps=10_000 - save model for every 10k steps save_total_limit=2 - save limit is set for 2 --> **Intended uses & limitations** this is for anyone who wants to make use of marathi language models for various tasks like language generation, translation and many more use cases. **Whatever else is helpful!** If you are intersted in collaboration feel free to reach me [Deepam](mailto:[email protected])
dbernsohn/roberta-python
dbernsohn
2021-05-20T15:57:13Z
4
3
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# roberta-python --- language: python datasets: - code_search_net --- This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) pre-trained version on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for **Python** Mask Language Model mission. To load the model: (necessary packages: !pip install transformers sentencepiece) ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-python") model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-python") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) ``` You can then use this model to fill masked words in a Python code. ```python code = """ new_dict = {} for k, v in my_dict.<mask>(): new_dict[k] = v**2 """.lstrip() pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)} sorted(pred.items(), key=lambda kv: kv[1], reverse=True) # [('items', 0.7376779913902283), # ('keys', 0.16238391399383545), # ('values', 0.03965481370687485), # ('iteritems', 0.03346433863043785), # ('splitlines', 0.0032723243348300457)] ``` The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
dbernsohn/roberta-php
dbernsohn
2021-05-20T15:56:10Z
5
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# roberta-php --- language: php datasets: - code_search_net --- This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) pre-trained version on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for **php** Mask Language Model mission. To load the model: (necessary packages: !pip install transformers sentencepiece) ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-php") model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-php") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) ``` You can then use this model to fill masked words in a Java code. ```python code = """ $people = array( array('name' => 'Kalle', 'salt' => 856412), array('name' => 'Pierre', 'salt' => 215863) ); for($i = 0; $i < count($<mask>); ++$i) { $people[$i]['salt'] = mt_rand(000000, 999999); } """.lstrip() pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)} sorted(pred.items(), key=lambda kv: kv[1], reverse=True) # [('people', 0.785636842250824), # ('parts', 0.006270722020417452), # ('id', 0.0035842324141412973), # ('data', 0.0025512021966278553), # ('config', 0.002258970635011792)] ``` The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
dbernsohn/roberta-java
dbernsohn
2021-05-20T15:54:29Z
13
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "arxiv:1907.11692", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# roberta-java --- language: Java datasets: - code_search_net --- This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) pre-trained version on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for **Java** Mask Language Model mission. To load the model: (necessary packages: !pip install transformers sentencepiece) ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-java") model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-java") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) ``` You can then use this model to fill masked words in a Java code. ```python code = """ String[] cars = {"Volvo", "BMW", "Ford", "Mazda"}; for (String i : cars) { System.out.<mask>(i); } """.lstrip() pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)} sorted(pred.items(), key=lambda kv: kv[1], reverse=True) # [('println', 0.32571351528167725), # ('get', 0.2897663116455078), # ('remove', 0.0637081190943718), # ('exit', 0.058875661343336105), # ('print', 0.034190207719802856)] ``` The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM) > Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
clue/roberta_chinese_large
clue
2021-05-20T15:28:53Z
12
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "zh", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: zh --- ## roberta_chinese_large ### Overview **Language model:** roberta-large **Model size:** 1.2G **Language:** Chinese **Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020) **Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE) ### Results For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE). ### Usage **NOTE:** You have to call **BertTokenizer** instead of RobertaTokenizer !!! ``` import torch from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_large") roberta = BertModel.from_pretrained("clue/roberta_chinese_large") ``` ### About CLUE benchmark Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard. Github: https://github.com/CLUEbenchmark Website: https://www.cluebenchmarks.com/
clue/roberta_chinese_base
clue
2021-05-20T15:23:58Z
317
7
transformers
[ "transformers", "pytorch", "jax", "roberta", "zh", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: zh --- ## roberta_chinese_base ### Overview **Language model:** roberta-base **Model size:** 392M **Language:** Chinese **Training data:** [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020) **Eval data:** [CLUE dataset](https://github.com/CLUEbenchmark/CLUE) ### Results For results on downstream tasks like text classification, please refer to [this repository](https://github.com/CLUEbenchmark/CLUE). ### Usage **NOTE:** You have to call **BertTokenizer** instead of RobertaTokenizer !!! ``` import torch from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_base") roberta = BertModel.from_pretrained("clue/roberta_chinese_base") ``` ### About CLUE benchmark Organization of Language Understanding Evaluation benchmark for Chinese: tasks & datasets, baselines, pre-trained Chinese models, corpus and leaderboard. Github: https://github.com/CLUEbenchmark Website: https://www.cluebenchmarks.com/
aychang/roberta-base-imdb
aychang
2021-05-20T14:25:56Z
1,446
5
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "en", "dataset:imdb", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- imdb
metrics:
---

# IMDB Sentiment Task: roberta-base

## Model description

A simple roberta-base model trained on the "imdb" dataset.

## Intended uses & limitations

#### How to use

##### Transformers

```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/roberta-base-imdb"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)

results = nlp(["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."])
```

##### AdaptNLP

```python
from adaptnlp import EasySequenceClassifier

model_name = "aychang/roberta-base-imdb"
texts = ["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."]

classifier = EasySequenceClassifier()

results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```

#### Limitations and bias

This is a minimal language model trained on a benchmark dataset.

## Training data

IMDB https://huggingface.co/datasets/imdb

## Training procedure

#### Hardware
One V100

#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./models',
    overwrite_output_dir=False,
    num_train_epochs=2,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    warmup_steps=500,
    weight_decay=0.01,
    evaluation_strategy="steps",
    logging_dir='./logs',
    fp16=False,
    eval_steps=800,
    save_steps=300000
)
```

## Eval results
```
{'epoch': 2.0,
 'eval_accuracy': 0.94668,
 'eval_f1': array([0.94603457, 0.94731017]),
 'eval_loss': 0.2578844428062439,
 'eval_precision': array([0.95762642, 0.93624502]),
 'eval_recall': array([0.93472, 0.95864]),
 'eval_runtime': 244.7522,
 'eval_samples_per_second': 102.144}
```
pchanda/pretrained-smiles-pubchem10m
pchanda
2021-05-20T13:01:15Z
729
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
RoBERTa masked-language model pretrained on 10M SMILES strings from PubChem.
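As an illustration only (not from the original card), a masked SMILES token can be recovered with the standard fill-mask pipeline. The aspirin-like SMILES string below is mine, and the sketch assumes the hosted checkpoint ships its own tokenizer.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="pchanda/pretrained-smiles-pubchem10m",
    tokenizer="pchanda/pretrained-smiles-pubchem10m",
)

# Mask one token of an aspirin-like SMILES string (example chosen for illustration).
mask = fill_mask.tokenizer.mask_token
smiles = f"CC(=O)Oc1ccccc1C(=O){mask}"
for pred in fill_mask(smiles):
    print(pred["token_str"], round(pred["score"], 4))
```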
Naveen-k/KanBERTo
Naveen-k
2021-05-20T12:16:02Z
13
1
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "kn", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: kn --- # Welcome to KanBERTo (ಕನ್ಬರ್ಟೋ) ## Model Description > This is a small language model for [Kannada](https://en.wikipedia.org/wiki/Kannada) language with 1M data samples taken from [OSCAR page](https://traces1.inria.fr/oscar/files/compressed-orig/kn.txt.gz) ## Training params - **Dataset** - 1M data samples are used to train this model from OSCAR page(https://traces1.inria.fr/oscar/) eventhough data set is of 1.7 GB due to resource constraint to train I have picked only 1M data from the total 1.7GB data set. If you are interested in collaboration and have computational resources to train on you are most welcome to do so. - **Preprocessing** - ByteLevelBPETokenizer is used to tokenize the sentences at character level and vocabulary size is set to 52k as per standard values given by 🤗 - **Hyperparameters** - __ByteLevelBPETokenizer__ : vocabulary size = 52_000 and min_frequency = 2 __Trainer__ : num_train_epochs=12 - trained for 12 epochs per_gpu_train_batch_size=64 - batch size for the datasamples is 64 save_steps=10_000 - save model for every 10k steps save_total_limit=2 - save limit is set for 2 **Intended uses & limitations** this is for anyone who wants to make use of kannada language models for various tasks like language generation, translation and many more use cases. **Whatever else is helpful!** If you are intersted in collaboration feel free to reach me [Naveen](mailto:[email protected])
NTUYG/DeepSCC-RoBERTa
NTUYG
2021-05-20T12:15:05Z
22
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
## How to use

```python
from simpletransformers.classification import ClassificationModel, ClassificationArgs

name_file = ['bash', 'c', 'c#', 'c++', 'css', 'haskell', 'java', 'javascript', 'lua', 'objective-c',
             'perl', 'php', 'python', 'r', 'ruby', 'scala', 'sql', 'swift', 'vb.net']

deep_scc_model_args = ClassificationArgs(num_train_epochs=10, max_seq_length=300, use_multiprocessing=False)
deep_scc_model = ClassificationModel("roberta", "NTUYG/DeepSCC-RoBERTa", num_labels=19, args=deep_scc_model_args, use_cuda=True)

code = ''' public static double getSimilarity(String phrase1, String phrase2) {
       return (getSC(phrase1, phrase2) + getSC(phrase2, phrase1)) / 2.0;
   }'''
code = code.replace('\n', ' ').replace('\r', ' ')

predictions, raw_outputs = deep_scc_model.predict([code])
predict = name_file[predictions[0]]
print(predict)
```
MoseliMotsoehli/zuBERTa
MoseliMotsoehli
2021-05-20T12:14:07Z
6
1
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "zu", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: zu --- # zuBERTa zuBERTa is a RoBERTa style transformer language model trained on zulu text. ## Intended uses & limitations The model can be used for getting embeddings to use on a down-stream task such as question answering. #### How to use ```python >>> from transformers import pipeline >>> from transformers import AutoTokenizer, AutoModelWithLMHead >>> tokenizer = AutoTokenizer.from_pretrained("MoseliMotsoehli/zuBERTa") >>> model = AutoModelWithLMHead.from_pretrained("MoseliMotsoehli/zuBERTa") >>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer) >>> unmasker("Abafika eNkandla bafika sebeholwa <mask> uMpongo kaZingelwayo.") [ { "sequence": "<s>Abafika eNkandla bafika sebeholwa khona uMpongo kaZingelwayo.</s>", "score": 0.050459690392017365, "token": 555, "token_str": "Ġkhona" }, { "sequence": "<s>Abafika eNkandla bafika sebeholwa inkosi uMpongo kaZingelwayo.</s>", "score": 0.03668094798922539, "token": 2321, "token_str": "Ġinkosi" }, { "sequence": "<s>Abafika eNkandla bafika sebeholwa ubukhosi uMpongo kaZingelwayo.</s>", "score": 0.028774697333574295, "token": 5101, "token_str": "Ġubukhosi" } ] ``` ## Training data 1. 30k sentences of text, came from the [Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download) of zulu 2018. These were collected from news articles and creative writtings. 2. ~7500 articles of human generated translations were scraped from the zulu [wikipedia](https://zu.wikipedia.org/wiki/Special:AllPages). ### BibTeX entry and citation info ```bibtex @inproceedings{author = {Moseli Motsoehli}, title = {Towards transformation of Southern African language models through transformers.}, year={2020} } ```
LIAMF-USP/roberta-large-finetuned-race
LIAMF-USP
2021-05-20T12:08:36Z
33
11
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "multiple-choice", "dataset:race", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:04Z
--- language: "english" license: "mit" datasets: - race metrics: - accuracy --- # Roberta Large Fine Tuned on RACE ## Model description This model is a fine-tuned model of Roberta-large applied on RACE #### How to use ```python import datasets from transformers import RobertaTokenizer from transformers import RobertaForMultipleChoice tokenizer = RobertaTokenizer.from_pretrained( "LIAMF-USP/roberta-large-finetuned-race") model = RobertaForMultipleChoice.from_pretrained( "LIAMF-USP/roberta-large-finetuned-race") dataset = datasets.load_dataset( "race", "all", split=["train", "validation", "test"], )training_examples = dataset[0] evaluation_examples = dataset[1] test_examples = dataset[2] example=training_examples[0] example_id = example["example_id"] question = example["question"] context = example["article"] options = example["options"] label_example = example["answer"] label_map = {label: i for i, label in enumerate(["A", "B", "C", "D"])} choices_inputs = [] for ending_idx, (_, ending) in enumerate( zip(context, options)): if question.find("_") != -1: # fill in the banks questions question_option = question.replace("_", ending) else: question_option = question + " " + ending inputs = tokenizer( context, question_option, add_special_tokens=True, max_length=MAX_SEQ_LENGTH, padding="max_length", truncation=True, return_overflowing_tokens=False, ) label = label_map[label_example] input_ids = [x["input_ids"] for x in choices_inputs] attention_mask = ( [x["attention_mask"] for x in choices_inputs] # as the senteces follow the same structure, #just one of them is necessary to check if "attention_mask" in choices_inputs[0] else None ) example_encoded = { "example_id": example_id, "input_ids": input_ids, "attention_mask": attention_mask, "label": label, } output = model(**example_encoded) ``` ## Training data The initial model was [roberta large model](https://huggingface.co/roberta-large) which was then fine-tuned on [RACE dataset](https://www.cs.cmu.edu/~glai1/data/race/) ## Training procedure It was necessary to preprocess the data with a method that is exemplified for a single instance in the _How to use_ section. The used hyperparameters were the following: | Hyperparameter | Value | |:----:|:----:| | adam_beta1 | 0.9 | | adam_beta2 | 0.98 | | adam_epsilon | 1.000e-8 | | eval_batch_size | 32 | | train_batch_size | 1 | | fp16 | True | | gradient_accumulation_steps | 16 | | learning_rate | 0.00001 | | warmup_steps | 1000 | | max_length | 512 | | epochs | 4 | ## Eval results: | Dataset Acc | Eval | All Test |High School Test |Middle School Test | |:----:|:----:|:----:|:----:|:----:| | | 85.2 | 84.9|83.5|88.0| **The model was trained with a Tesla V100-PCIE-16GB**
LIAMF-USP/aristo-roberta
LIAMF-USP
2021-05-20T12:04:27Z
11
1
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "multiple-choice", "dataset:race", "dataset:ai2_arc", "dataset:openbookqa", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:04Z
--- language: "english" license: "mit" datasets: - race - ai2_arc - openbookqa metrics: - accuracy --- # Roberta Large Fine Tuned on RACE ## Model description This model follows the implementation by Allen AI team about [Aristo Roberta V7 Model](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0) given in [ARC Challenge](https://leaderboard.allenai.org/arc/submissions/public) #### How to use ```python import datasets from transformers import RobertaTokenizer from transformers import RobertaForMultipleChoice tokenizer = RobertaTokenizer.from_pretrained( "LIAMF-USP/aristo-roberta") model = RobertaForMultipleChoice.from_pretrained( "LIAMF-USP/aristo-roberta") dataset = datasets.load_dataset( "arc",, split=["train", "validation", "test"], ) training_examples = dataset[0] evaluation_examples = dataset[1] test_examples = dataset[2] example=training_examples[0] example_id = example["example_id"] question = example["question"] label_example = example["answer"] options = example["options"] if label_example in ["A", "B", "C", "D", "E"]: label_map = {label: i for i, label in enumerate( ["A", "B", "C", "D", "E"])} elif label_example in ["1", "2", "3", "4", "5"]: label_map = {label: i for i, label in enumerate( ["1", "2", "3", "4", "5"])} else: print(f"{label_example} not found") while len(options) < 5: empty_option = {} empty_option['option_context'] = '' empty_option['option_text'] = '' options.append(empty_option) choices_inputs = [] for ending_idx, option in enumerate(options): ending = option["option_text"] context = option["option_context"] if question.find("_") != -1: # fill in the banks questions question_option = question.replace("_", ending) else: question_option = question + " " + ending inputs = tokenizer( context, question_option, add_special_tokens=True, max_length=MAX_SEQ_LENGTH, padding="max_length", truncation=True, return_overflowing_tokens=False, ) if "num_truncated_tokens" in inputs and inputs["num_truncated_tokens"] > 0: logging.warning(f"Question: {example_id} with option {ending_idx} was truncated") choices_inputs.append(inputs) label = label_map[label_example] input_ids = [x["input_ids"] for x in choices_inputs] attention_mask = ( [x["attention_mask"] for x in choices_inputs] # as the senteces follow the same structure, just one of them is # necessary to check if "attention_mask" in choices_inputs[0] else None ) example_encoded = { "example_id": example_id, "input_ids": input_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids, "label": label } output = model(**example_encoded) ``` ## Training data the Training data was the same as proposed [here](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0) The only diferrence was the hypeparameters of RACE fine tuned model, which were reported [here](https://huggingface.co/LIAMF-USP/roberta-large-finetuned-race#eval-results) ## Training procedure It was necessary to preprocess the data with a method that is exemplified for a single instance in the _How to use_ section. 
The used hyperparameters were the following: | Hyperparameter | Value | |:----:|:----:| | adam_beta1 | 0.9 | | adam_beta2 | 0.98 | | adam_epsilon | 1.000e-8 | | eval_batch_size | 16 | | train_batch_size | 4 | | fp16 | True | | gradient_accumulation_steps | 4 | | learning_rate | 0.00001 | | warmup_steps | 0.06 | | max_length | 256 | | epochs | 4 | The other parameters were the default ones from [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) and [Trainer Arguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) ## Eval results: | Dataset Acc | Challenge Test | |:----:|:----:| | | 65.358 | **The model was trained with a TITAN RTX**
HooshvareLab/roberta-fa-zwnj-base-ner
HooshvareLab
2021-05-20T11:55:34Z
113
1
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "token-classification", "fa", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: fa --- # RobertaNER This model fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covered ten types of entities: - Date (DAT) - Event (EVE) - Facility (FAC) - Location (LOC) - Money (MON) - Organization (ORG) - Percent (PCT) - Person (PER) - Product (PRO) - Time (TIM) ## Dataset Information | | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM | |:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:| | Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 | | Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 | | Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 | ## Evaluation The following tables summarize the scores obtained by model overall and per each class. **Overall** | Model | accuracy | precision | recall | f1 | |:----------:|:--------:|:---------:|:--------:|:--------:| | Roberta | 0.994849 | 0.949816 | 0.960235 | 0.954997 | **Per entities** | | number | precision | recall | f1 | |:---: |:------: |:---------: |:--------: |:--------: | | DAT | 407 | 0.844869 | 0.869779 | 0.857143 | | EVE | 256 | 0.948148 | 1.000000 | 0.973384 | | FAC | 248 | 0.957529 | 1.000000 | 0.978304 | | LOC | 2884 | 0.965422 | 0.968100 | 0.966759 | | MON | 98 | 0.937500 | 0.918367 | 0.927835 | | ORG | 3216 | 0.943662 | 0.958333 | 0.950941 | | PCT | 94 | 1.000000 | 0.968085 | 0.983784 | | PER | 2646 | 0.957030 | 0.959562 | 0.958294 | | PRO | 318 | 0.963636 | 1.000000 | 0.981481 | | TIM | 43 | 0.739130 | 0.790698 | 0.764045 | ## How To Use You use this model with Transformers pipeline for NER. ### Installing requirements ```bash pip install transformers ``` ### How to predict using pipeline ```python from transformers import AutoTokenizer from transformers import AutoModelForTokenClassification # for pytorch from transformers import TFAutoModelForTokenClassification # for tensorflow from transformers import pipeline model_name_or_path = "HooshvareLab/roberta-fa-zwnj-base-ner" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch # model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند." ner_results = nlp(example) print(ner_results) ``` ## Questions? Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo.
ykacer/bert-base-cased-imdb-sequence-classification
ykacer
2021-05-20T09:31:37Z
6
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "sequence", "classification", "en", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en thumbnail: https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png tags: - sequence - classification license: apache-2.0 datasets: - imdb metrics: - accuracy ---
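The card above contains only metadata, so here is a brief, illustrative sketch; the example reviews are mine, and the labels come from the stored config (they may surface as generic `LABEL_0`/`LABEL_1`).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ykacer/bert-base-cased-imdb-sequence-classification",
)

reviews = [
    "A beautifully shot film with a story that stays with you.",
    "Two hours I will never get back.",
]
for result in classifier(reviews):
    print(result["label"], round(result["score"], 4))
```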
twmkn9/bert-base-uncased-squad2
twmkn9
2021-05-20T08:21:23Z
245
2
transformers
[ "transformers", "pytorch", "jax", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
This model is [BERT base uncased](https://huggingface.co/bert-base-uncased) trained on SQuAD v2 as: ``` export SQUAD_DIR=../../squad2 python3 run_squad.py --model_type bert --model_name_or_path bert-base-uncased --do_train --do_eval --overwrite_cache --do_lower_case --version_2_with_negative --save_steps 100000 --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --per_gpu_train_batch_size 8 --num_train_epochs 3 --learning_rate 3e-5 --max_seq_length 384 --doc_stride 128 --output_dir ./tmp/bert_fine_tuned/ ``` Performance on a dev subset is close to the original paper: ``` Results: { 'exact': 72.35932872655479, 'f1': 75.75355132564763, 'total': 6078, 'HasAns_exact': 74.29553264604812, 'HasAns_f1': 81.38490892002987, 'HasAns_total': 2910, 'NoAns_exact': 70.58080808080808, 'NoAns_f1': 70.58080808080808, 'NoAns_total': 3168, 'best_exact': 72.35932872655479, 'best_exact_thresh': 0.0, 'best_f1': 75.75355132564766, 'best_f1_thresh': 0.0 } ``` We are hopeful this might save you time, energy, and compute. Cheers!
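For inference (not shown in the card), a minimal sketch with the `question-answering` pipeline is below; the passage and question are my own, and `handle_impossible_answer=True` is set because the model was trained with unanswerable questions.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="twmkn9/bert-base-uncased-squad2",
    tokenizer="twmkn9/bert-base-uncased-squad2",
)

context = (
    "SQuAD 2.0 combines the questions from SQuAD 1.1 with over 50,000 "
    "unanswerable questions written to look similar to answerable ones."
)
result = qa(
    question="What does SQuAD 2.0 add on top of SQuAD 1.1?",
    context=context,
    handle_impossible_answer=True,
)
print(result)
```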
tugstugi/bert-large-mongolian-cased
tugstugi
2021-05-20T08:16:24Z
28
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "mongolian", "cased", "mn", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: "mn" tags: - bert - mongolian - cased --- # BERT-LARGE-MONGOLIAN-CASED [Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert) ## Model description This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu). Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs. This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/), [huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese). #### How to use ```python from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-large-mongolian-cased', use_fast=False) model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-large-mongolian-cased') ## declare task ## pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer) ## example ## input_ = 'Монгол улсын [MASK] Улаанбаатар хотоос ярьж байна.' output_ = pipe(input_) for i in range(len(output_)): print(output_[i]) ## output ## # {'sequence': 'Монгол улсын нийслэл Улаанбаатар хотоос ярьж байна.', 'score': 0.9779232740402222, 'token': 1176, 'token_str': 'нийслэл'} # {'sequence': 'Монгол улсын Нийслэл Улаанбаатар хотоос ярьж байна.', 'score': 0.015034765936434269, 'token': 4059, 'token_str': 'Нийслэл'} # {'sequence': 'Монгол улсын Ерөнхийлөгч Улаанбаатар хотоос ярьж байна.', 'score': 0.0021413620561361313, 'token': 325, 'token_str': 'Ерөнхийлөгч'} # {'sequence': 'Монгол улсын ерөнхийлөгч Улаанбаатар хотоос ярьж байна.', 'score': 0.0008035294013097882, 'token': 1215, 'token_str': 'ерөнхийлөгч'} # {'sequence': 'Монгол улсын нийслэлийн Улаанбаатар хотоос ярьж байна.', 'score': 0.0006434018723666668, 'token': 356, 'token_str': 'нийслэлийн'} ``` ## Training data Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)] ### BibTeX entry and citation info ```bibtex @misc{mongolian-bert, author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold}, title = {BERT Pretrained Models on Mongolian Datasets}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}} } ```
tugstugi/bert-base-mongolian-uncased
tugstugi
2021-05-20T08:13:09Z
30
2
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "mongolian", "uncased", "mn", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: "mn" tags: - bert - mongolian - uncased --- # BERT-BASE-MONGOLIAN-UNCASED [Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert) ## Model description This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu). Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs. This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/), [huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese). #### How to use ```python from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-base-mongolian-uncased', use_fast=False) model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-base-mongolian-uncased') ## declare task ## pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer) ## example ## input_ = 'Миний [MASK] хоол идэх нь тун чухал.' output_ = pipe(input_) for i in range(len(output_)): print(output_[i]) ## output ## #{'sequence': 'миний хувьд хоол идэх нь тун чухал.', 'score': 0.7889143824577332, 'token': 126, 'token_str': 'хувьд'} #{'sequence': 'миний бодлоор хоол идэх нь тун чухал.', 'score': 0.18616807460784912, 'token': 6106, 'token_str': 'бодлоор'} #{'sequence': 'миний зүгээс хоол идэх нь тун чухал.', 'score': 0.004825591575354338, 'token': 761, 'token_str': 'зүгээс'} #{'sequence': 'миний биед хоол идэх нь тун чухал.', 'score': 0.0015743684489279985, 'token': 3010, 'token_str': 'биед'} #{'sequence': 'миний тухайд хоол идэх нь тун чухал.', 'score': 0.0014919431414455175, 'token': 1712, 'token_str': 'тухайд'} ``` ## Training data Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)] ### BibTeX entry and citation info ```bibtex @misc{mongolian-bert, author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold}, title = {BERT Pretrained Models on Mongolian Datasets}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}} } ```
trueto/medbert-base-wwm-chinese
trueto
2021-05-20T08:09:44Z
8
9
transformers
[ "transformers", "pytorch", "jax", "bert", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# [medbert](https://github.com/trueto/medbert)

This project open-sources the models from the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing".

## Evaluation benchmarks

Four datasets were built: a Chinese electronic medical record NER dataset (CEMRNER), a Chinese medical text NER dataset (CMTNER), a Chinese medical question–question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).

| **Dataset** | **Train** | **Validation** | **Test** | **Task type** | **Source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud (医渡云) |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare (平安医疗) |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |

## Released models

MedBERT and MedAlbert were obtained by pretraining BERT and ALBERT models on a 650-million-character corpus of Chinese clinical natural language text.

## Performance

Performance of each model under the same experimental environment, training parameters, and scripts:

| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
| MedBERT-wwm | **82.60%** | 67.11% | 88.02% | 81.72% |
| MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
| - | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
| MedAlbert-wwm | **81.28%** | **64.12%** | **87.71%** | **80.46%** |

## Citation

```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
```
trueto/medbert-base-chinese
trueto
2021-05-20T08:08:47Z
276
13
transformers
[ "transformers", "pytorch", "jax", "bert", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# [medbert](https://github.com/trueto/medbert)

This project open-sources the models from the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing".

## Evaluation benchmarks

Four datasets were built: a Chinese electronic medical record NER dataset (CEMRNER), a Chinese medical text NER dataset (CMTNER), a Chinese medical question–question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).

| **Dataset** | **Train** | **Validation** | **Test** | **Task type** | **Source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud (医渡云) |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare (平安医疗) |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |

## Released models

MedBERT and MedAlbert were obtained by pretraining BERT and ALBERT models on a 650-million-character corpus of Chinese clinical natural language text.

## Performance

Performance of each model under the same experimental environment, training parameters, and scripts:

| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
| MedBERT-wwm | **82.60%** | 67.11% | 88.02% | 81.72% |
| MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
| - | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
| MedAlbert-wwm | **81.28%** | **64.12%** | **87.71%** | **80.46%** |

## Citation

```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
```
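The card lists benchmarks but no loading snippet, so a minimal feature-extraction sketch is added here; it is illustrative only, the Chinese example sentence is mine, and it assumes the checkpoint loads with the standard BERT classes.

```python
import torch
from transformers import BertTokenizer, BertModel

model_id = "trueto/medbert-base-chinese"
tokenizer = BertTokenizer.from_pretrained(model_id)
model = BertModel.from_pretrained(model_id)

# "The patient reports a headache for three days." (illustrative clinical sentence)
inputs = tokenizer("患者主诉头痛三天。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)
```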
trtd56/autonlp-wrime_joy_only-117396
trtd56
2021-05-20T08:07:48Z
4
1
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autonlp", "ja", "dataset:trtd56/autonlp-data-wrime_joy_only", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: ja widget: - text: "I love AutoNLP 🤗" datasets: - trtd56/autonlp-data-wrime_joy_only --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 117396 ## Validation Metrics - Loss: 0.4094310998916626 - Accuracy: 0.8201678240740741 - Precision: 0.6750303520841765 - Recall: 0.7912713472485768 - AUC: 0.8927167943538512 - F1: 0.728543350076436 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/trtd56/autonlp-wrime_joy_only-117396 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("trtd56/autonlp-wrime_joy_only-117396", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("trtd56/autonlp-wrime_joy_only-117396", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
textattack/bert-base-uncased-rotten_tomatoes
textattack
2021-05-20T07:47:13Z
7
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
## bert-base-uncased fine-tuned with TextAttack on the rotten_tomatoes dataset

This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned for 10 epochs with a batch size of 64, a learning rate of 5e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.875234521575985, as measured by the eval set accuracy, found after 4 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
textattack/bert-base-uncased-imdb
textattack
2021-05-20T07:42:02Z
17,464
6
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the imdb dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.89088, as measured by the eval set accuracy, found after 4 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
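A minimal classification sketch (not part of the original card). The stock `text-classification` pipeline is assumed to serve the fine-tuned head; TextAttack checkpoints typically return generic `LABEL_0`/`LABEL_1` names rather than `negative`/`positive`:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="textattack/bert-base-uncased-imdb")
# LABEL_1 is assumed to correspond to a positive review.
print(classifier("A gripping, beautifully shot film with a phenomenal lead performance."))
```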
techthiyanes/Bert_Bahasa_Sentiment
techthiyanes
2021-05-20T07:26:52Z
13
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Example usage of this Bahasa sentiment classification model:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and model from this repository.
tokenizer = AutoTokenizer.from_pretrained("techthiyanes/Bert_Bahasa_Sentiment")
model = AutoModelForSequenceClassification.from_pretrained("techthiyanes/Bert_Bahasa_Sentiment")

inputs = tokenizer("saya tidak", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)

outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
print(logits)
```
susumu2357/bert-base-swedish-squad2
susumu2357
2021-05-20T07:20:04Z
99
1
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "question-answering", "squad", "sv", "dataset:susumu2357/squad_v2_sv", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: - sv tags: - squad license: apache-2.0 datasets: - susumu2357/squad_v2_sv metrics: - squad_v2 --- # Swedish BERT Fine-tuned on SQuAD v2 This model is a fine-tuning checkpoint of Swedish BERT on SQuAD v2. ## Training data Fine-tuning was done based on the pre-trained model [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased). Training and dev datasets are our [Swedish translation of SQuAD v2](https://github.com/susumu2357/SQuAD_v2_sv). [Here](https://huggingface.co/datasets/susumu2357/squad_v2_sv) is the HuggingFace Datasets. ## Hyperparameters ``` batch_size = 16 n_epochs = 2 max_seq_len = 386 learning_rate = 3e-5 warmup_steps = 2900 # warmup_proportion = 0.2 doc_stride=128 max_query_length=64 ``` ## Eval results ``` 'exact': 66.72642524202223 'f1': 70.11149581003404 'total': 11156 'HasAns_exact': 55.574745730186144 'HasAns_f1': 62.821693965983044 'HasAns_total': 5211 'NoAns_exact': 76.50126156433979 'NoAns_f1': 76.50126156433979 'NoAns_total': 5945 ``` ## Limitations and bias This model may contain biases due to mistranslations of the SQuAD dataset. ## BibTeX entry and citation info ```bibtex @misc{svSQuADbert, author = {Susumu Okazawa}, title = {Swedish BERT Fine-tuned on Swedish SQuAD 2.0}, year = {2021}, howpublished = {\url{https://huggingface.co/susumu2357/bert-base-swedish-squad2}}, } ```
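A minimal question-answering sketch (not part of the original card; the use of the stock `question-answering` pipeline is an assumption):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="susumu2357/bert-base-swedish-squad2")

result = qa(
    question="Vad heter Sveriges huvudstad?",  # "What is the capital of Sweden called?"
    context="Stockholm är Sveriges huvudstad och landets största stad.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Stockholm'}
```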
soniakris/Sonia_model
soniakris
2021-05-20T07:09:49Z
4
0
transformers
[ "transformers", "tf", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
TensorFlow model using the [MASK] token (masked language modeling).
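A minimal fill-mask sketch (not part of the original card). The `fill-mask` pipeline, the TensorFlow-only weights (`framework="tf"`), and the English example are all assumptions based on the repository tags; the card does not state the training data or language:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="soniakris/Sonia_model", framework="tf")
print(fill_mask("The capital of France is [MASK]."))
```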
socialmediaie/TRAC2020_HIN_C_bert-base-multilingual-uncased
socialmediaie
2021-05-20T07:01:31Z
6
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
socialmediaie/TRAC2020_HIN_A_bert-base-multilingual-uncased
socialmediaie
2021-05-20T06:58:51Z
4
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020 Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying. Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752# We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice. Our approach is described in our paper titled: > Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020 NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper. If you plan to use the dataset, please cite the following resources: * Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020). * Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
``` @inproceedings{Mishra2020TRAC, author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)}, title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, year = {2020} } @data{illinoisdatabankIDB-8882752, author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu}, doi = {10.13012/B2IDB-8882752_V1}, publisher = {University of Illinois at Urbana-Champaign}, title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}}, url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1}, year = {2020} } ``` ## Usage The models can be used via the following code: ```python from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification import torch from pathlib import Path from scipy.special import softmax import numpy as np import pandas as pd TASK_LABEL_IDS = { "Sub-task A": ["OAG", "NAG", "CAG"], "Sub-task B": ["GEN", "NGEN"], "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"] } model_version="databank" # other option is hugging face library if model_version == "databank": # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752 # Unzip the file at some model_path (we are using: "databank_model") model_path = next(Path("databank_model").glob("./*/output/*/model")) # Assuming you get the following type of structure inside "databank_model" # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model' lang, task, _, base_model, _ = model_path.parts tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(model_path) else: lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased" base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}" tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForSequenceClassification.from_pretrained(base_model) # For doing inference set model in eval mode model.eval() # If you want to further fine-tune the model you can reset the model to model.train() task_labels = TASK_LABEL_IDS[task] sentence = "This is a good cat and this is a bad dog." processed_sentence = f"{tokenizer.cls_token} {sentence}" tokens = tokenizer.tokenize(sentence) indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) tokens_tensor = torch.tensor([indexed_tokens]) with torch.no_grad(): logits, = model(tokens_tensor, labels=None) logits preds = logits.detach().cpu().numpy() preds_probs = softmax(preds, axis=1) preds = np.argmax(preds_probs, axis=1) preds_labels = np.array(task_labels)[preds] print(dict(zip(task_labels, preds_probs[0])), preds_labels) """You should get an output as follows: ({'CAG-GEN': 0.06762535, 'CAG-NGEN': 0.03244293, 'NAG-GEN': 0.6897794, 'NAG-NGEN': 0.15498641, 'OAG-GEN': 0.034373745, 'OAG-NGEN': 0.020792078}, array(['NAG-GEN'], dtype='<U8')) """ ```
sismetanin/sbert-ru-sentiment-rureviews
sismetanin
2021-05-20T06:35:54Z
63
3
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "sentiment analysis", "Russian", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - ru tags: - sentiment analysis - Russian --- ## SBERT-ru-sentiment-RuReviews SBERT-ru-sentiment-RuReviews is a [SBERT-Large](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) model fine-tuned on [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the ”Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia. <table> <thead> <tr> <th rowspan="4">Model</th> <th rowspan="4">Score<br></th> <th rowspan="4">Rank</th> <th colspan="12">Dataset</th> </tr> <tr> <td colspan="6">SentiRuEval-2016<br></td> <td colspan="2" rowspan="2">RuSentiment</td> <td rowspan="2">KRND</td> <td rowspan="2">LINIS Crowd</td> <td rowspan="2">RuTweetCorp</td> <td rowspan="2">RuReviews</td> </tr> <tr> <td colspan="3">TC</td> <td colspan="3">Banks</td> </tr> <tr> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>wighted</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> </tr> </thead> <tbody> <tr> <td>SOTA</td> <td>n/s</td> <td></td> <td>76.71</td> <td>66.40</td> <td>70.68</td> <td>67.51</td> <td>69.53</td> <td>74.06</td> <td>78.50</td> <td>n/s</td> <td>73.63</td> <td>60.51</td> <td>83.68</td> <td>77.44</td> </tr> <tr> <td>XLM-RoBERTa-Large</td> <td>76.37</td> <td>1</td> <td>82.26</td> <td>76.36</td> <td>79.42</td> <td>76.35</td> <td>76.08</td> <td>80.89</td> <td>78.31</td> <td>75.27</td> <td>75.17</td> <td>60.03</td> <td>88.91</td> <td>78.81</td> </tr> <tr> <td>SBERT-Large</td> <td>75.43</td> <td>2</td> <td>78.40</td> <td>71.36</td> <td>75.14</td> <td>72.39</td> <td>71.87</td> <td>77.72</td> <td>78.58</td> <td>75.85</td> <td>74.20</td> <td>60.64</td> <td>88.66</td> <td>77.41</td> </tr> <tr> <td>MBARTRuSumGazeta</td> <td>74.70</td> <td>3</td> <td>76.06</td> <td>68.95</td> <td>73.04</td> <td>72.34</td> <td>71.93</td> <td>77.83</td> <td>76.71</td> <td>73.56</td> <td>74.18</td> <td>60.54</td> <td>87.22</td> <td>77.51</td> </tr> <tr> <td>Conversational RuBERT</td> <td>74.44</td> <td>4</td> <td>76.69</td> <td>69.09</td> <td>73.11</td> <td>69.44</td> <td>68.68</td> <td>75.56</td> <td>77.31</td> <td>74.40</td> <td>73.10</td> <td>59.95</td> <td>87.86</td> <td>77.78</td> </tr> <tr> <td>LaBSE</td> <td>74.11</td> <td>5</td> <td>77.00</td> <td>69.19</td> <td>73.55</td> <td>70.34</td> <td>69.83</td> <td>76.38</td> <td>74.94</td> <td>70.84</td> <td>73.20</td> <td>59.52</td> <td>87.89</td> <td>78.47</td> </tr> <tr> <td>XLM-RoBERTa-Base</td> <td>73.60</td> <td>6</td> <td>76.35</td> <td>69.37</td> <td>73.42</td> <td>68.45</td> <td>67.45</td> <td>74.05</td> <td>74.26</td> <td>70.44</td> <td>71.40</td> <td>60.19</td> <td>87.90</td> <td>78.28</td> </tr> <tr> <td>RuBERT</td> <td>73.45</td> <td>7</td> <td>74.03</td> <td>66.14</td> <td>70.75</td> <td>66.46</td> <td>66.40</td> <td>73.37</td> <td>75.49</td> <td>71.86</td> <td>72.15</td> <td>60.55</td> <td>86.99</td> <td>77.41</td> </tr> <tr> <td>MBART-50-Large-Many-to-Many</td> <td>73.15</td> <td>8</td> <td>75.38</td> <td>67.81</td> <td>72.26</td> <td>67.13</td> <td>66.97</td> <td>73.85</td> <td>74.78</td> <td>70.98</td> <td>71.98</td> <td>59.20</td> <td>87.05</td> <td>77.24</td> </tr> <tr> <td>SlavicBERT</td> <td>71.96</td> <td>9</td> <td>71.45</td> <td>63.03</td> <td>68.44</td> <td>64.32</td> <td>63.99</td> <td>71.31</td> <td>72.13</td> <td>67.57</td> <td>72.54</td> <td>58.70</td> <td>86.43</td> <td>77.16</td> </tr> <tr> <td>EnRuDR-BERT</td> <td>71.51</td> <td>10</td> <td>72.56</td> <td>64.74</td> <td>69.07</td> <td>61.44</td> 
<td>60.21</td> <td>68.34</td> <td>74.19</td> <td>69.94</td> <td>69.33</td> <td>56.55</td> <td>87.12</td> <td>77.95</td> </tr> <tr> <td>RuDR-BERT</td> <td>71.14</td> <td>11</td> <td>72.79</td> <td>64.23</td> <td>68.36</td> <td>61.86</td> <td>60.92</td> <td>68.48</td> <td>74.65</td> <td>70.63</td> <td>68.74</td> <td>54.45</td> <td>87.04</td> <td>77.91</td> </tr> <tr> <td>MBART-50-Large</td> <td>69.46</td> <td>12</td> <td>70.91</td> <td>62.67</td> <td>67.24</td> <td>61.12</td> <td>60.25</td> <td>68.41</td> <td>72.88</td> <td>68.63</td> <td>70.52</td> <td>46.39</td> <td>86.48</td> <td>77.52</td> </tr> </tbody> </table> The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models' results was applied in the GLUE benchmark. ## Citation If you find this repository helpful, feel free to cite our publication: ``` @article{Smetanin2021Deep, author = {Sergey Smetanin and Mikhail Komarov}, title = {Deep transfer learning baselines for sentiment analysis in Russian}, journal = {Information Processing & Management}, volume = {58}, number = {3}, pages = {102484}, year = {2021}, issn = {0306-4573}, doi = {10.1016/j.ipm.2020.102484} } ``` Dataset: ``` @INPROCEEDINGS{Smetanin2019Sentiment, author={Sergey Smetanin and Michail Komarov}, booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)}, title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks}, year={2019}, volume={01}, pages={482-486}, doi={10.1109/CBI.2019.00062}, ISSN={2378-1963}, month={July} } ```
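A minimal usage sketch (not part of the original card). The stock `text-classification` pipeline is assumed to work, and the fine-tuned head's label names (likely generic `LABEL_*` ids) are not documented here:

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="sismetanin/sbert-ru-sentiment-rureviews")
# "Great quality, fast delivery!" — a typical e-commerce review sentence.
print(sentiment("Отличное качество, быстрая доставка!"))
```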
sismetanin/sbert-ru-sentiment-krnd
sismetanin
2021-05-20T06:27:51Z
16
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "sentiment analysis", "Russian", "SBERT-Large", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - ru tags: - sentiment analysis - Russian - SBERT-Large --- ## SBERT-Large on Kaggle Russian News Dataset <table> <thead> <tr> <th rowspan="4">Model</th> <th rowspan="4">Score<br></th> <th rowspan="4">Rank</th> <th colspan="12">Dataset</th> </tr> <tr> <td colspan="6">SentiRuEval-2016<br></td> <td colspan="2" rowspan="2">RuSentiment</td> <td rowspan="2">KRND</td> <td rowspan="2">LINIS Crowd</td> <td rowspan="2">RuTweetCorp</td> <td rowspan="2">RuReviews</td> </tr> <tr> <td colspan="3">TC</td> <td colspan="3">Banks</td> </tr> <tr> <td>micro F<sub>1</sub></td> <td>macro F<sub>1</sub></td> <td>F<sub>1</sub></td> <td>micro F<sub>1</sub></td> <td>macro F<sub>1</sub></td> <td>F<sub>1</sub></td> <td>wighted F<sub>1</sub></td> <td>F<sub>1</sub></td> <td>F<sub>1</sub></td> <td>F<sub>1</sub></td> <td>F<sub>1</sub></td> <td>F<sub>1</sub></td> </tr> </thead> <tbody> <tr> <td>SOTA</td> <td>n/s</td> <td></td> <td>76.71</td> <td>66.40</td> <td>70.68</td> <td>67.51</td> <td>69.53</td> <td>74.06</td> <td>78.50</td> <td>n/s</td> <td>73.63</td> <td>60.51</td> <td>83.68</td> <td>77.44</td> </tr> <tr> <td>XLM-RoBERTa-Large</td> <td>76.37</td> <td>1</td> <td>82.26</td> <td>76.36</td> <td>79.42</td> <td>76.35</td> <td>76.08</td> <td>80.89</td> <td>78.31</td> <td>75.27</td> <td>75.17</td> <td>60.03</td> <td>88.91</td> <td>78.81</td> </tr> <tr> <td>SBERT-Large</td> <td>75.43</td> <td>2</td> <td>78.40</td> <td>71.36</td> <td>75.14</td> <td>72.39</td> <td>71.87</td> <td>77.72</td> <td>78.58</td> <td>75.85</td> <td>74.20</td> <td>60.64</td> <td>88.66</td> <td>77.41</td> </tr> <tr> <td>MBARTRuSumGazeta</td> <td>74.70</td> <td>3</td> <td>76.06</td> <td>68.95</td> <td>73.04</td> <td>72.34</td> <td>71.93</td> <td>77.83</td> <td>76.71</td> <td>73.56</td> <td>74.18</td> <td>60.54</td> <td>87.22</td> <td>77.51</td> </tr> <tr> <td>Conversational RuBERT</td> <td>74.44</td> <td>4</td> <td>76.69</td> <td>69.09</td> <td>73.11</td> <td>69.44</td> <td>68.68</td> <td>75.56</td> <td>77.31</td> <td>74.40</td> <td>73.10</td> <td>59.95</td> <td>87.86</td> <td>77.78</td> </tr> <tr> <td>LaBSE</td> <td>74.11</td> <td>5</td> <td>77.00</td> <td>69.19</td> <td>73.55</td> <td>70.34</td> <td>69.83</td> <td>76.38</td> <td>74.94</td> <td>70.84</td> <td>73.20</td> <td>59.52</td> <td>87.89</td> <td>78.47</td> </tr> <tr> <td>XLM-RoBERTa-Base</td> <td>73.60</td> <td>6</td> <td>76.35</td> <td>69.37</td> <td>73.42</td> <td>68.45</td> <td>67.45</td> <td>74.05</td> <td>74.26</td> <td>70.44</td> <td>71.40</td> <td>60.19</td> <td>87.90</td> <td>78.28</td> </tr> <tr> <td>RuBERT</td> <td>73.45</td> <td>7</td> <td>74.03</td> <td>66.14</td> <td>70.75</td> <td>66.46</td> <td>66.40</td> <td>73.37</td> <td>75.49</td> <td>71.86</td> <td>72.15</td> <td>60.55</td> <td>86.99</td> <td>77.41</td> </tr> <tr> <td>MBART-50-Large-Many-to-Many</td> <td>73.15</td> <td>8</td> <td>75.38</td> <td>67.81</td> <td>72.26</td> <td>67.13</td> <td>66.97</td> <td>73.85</td> <td>74.78</td> <td>70.98</td> <td>71.98</td> <td>59.20</td> <td>87.05</td> <td>77.24</td> </tr> <tr> <td>SlavicBERT</td> <td>71.96</td> <td>9</td> <td>71.45</td> <td>63.03</td> <td>68.44</td> <td>64.32</td> <td>63.99</td> <td>71.31</td> <td>72.13</td> <td>67.57</td> <td>72.54</td> <td>58.70</td> <td>86.43</td> <td>77.16</td> </tr> <tr> <td>EnRuDR-BERT</td> <td>71.51</td> <td>10</td> <td>72.56</td> <td>64.74</td> <td>69.07</td> <td>61.44</td> <td>60.21</td> <td>68.34</td> <td>74.19</td> <td>69.94</td> <td>69.33</td> <td>56.55</td> <td>87.12</td> <td>77.95</td> </tr> <tr> <td>RuDR-BERT</td> 
<td>71.14</td> <td>11</td> <td>72.79</td> <td>64.23</td> <td>68.36</td> <td>61.86</td> <td>60.92</td> <td>68.48</td> <td>74.65</td> <td>70.63</td> <td>68.74</td> <td>54.45</td> <td>87.04</td> <td>77.91</td> </tr> <tr> <td>MBART-50-Large</td> <td>69.46</td> <td>12</td> <td>70.91</td> <td>62.67</td> <td>67.24</td> <td>61.12</td> <td>60.25</td> <td>68.41</td> <td>72.88</td> <td>68.63</td> <td>70.52</td> <td>46.39</td> <td>86.48</td> <td>77.52</td> </tr> </tbody> </table>
sismetanin/rubert-toxic-pikabu-2ch
sismetanin
2021-05-20T06:16:03Z
305
8
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "toxic comments classification", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - ru tags: - toxic comments classification --- ## RuBERT-Toxic RuBERT-Toxic is a [RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned on [Kaggle Russian Language Toxic Comments Dataset](https://www.kaggle.com/blackmoon/russian-language-toxic-comments). You can find a detailed description of the data used and the fine-tuning process in [this article](http://doi.org/10.28995/2075-7182-2020-19-1149-1159). You can also find this information at [GitHub](https://github.com/sismetanin/toxic-comments-detection-in-russian). | System | P | R | F<sub>1</sub> | | ------------- | ------------- | ------------- | ------------- | | MNB-Toxic | 87.01% | 81.22% | 83.21% | | M-BERT<sub>Base</sub>-Toxic | 91.19% | 91.10% | 91.15% | | <b>RuBERT-Toxic</b> | <b>91.91%</b> | <b>92.51%</b> | <b>92.20%</b> | | M-USE<sub>CNN</sub>-Toxic | 89.69% | 90.14% | 89.91% | | M-USE<sub>Trans</sub>-Toxic | 90.85% | 91.92% | 91.35% | We fine-tuned two versions of Multilingual Universal Sentence Encoder (M-USE), Multilingual Bidirectional Encoder Representations from Transformers (M-BERT) and RuBERT for toxic comments detection in Russian. Fine-tuned RuBERT-Toxic achieved F<sub>1</sub> = 92.20%, demonstrating the best classification score. ## Toxic Comments Dataset [Kaggle Russian Language Toxic Comments Dataset](https://www.kaggle.com/blackmoon/russian-language-toxic-comments) is the collection of Russian-language annotated comments from [2ch](https://2ch.hk/) and [Pikabu](https://pikabu.ru/), which was published on Kaggle in 2019. It consists of 14412 comments, where 4826 texts were labelled as toxic, and 9586 were labelled as non-toxic. The average length of comments is ~175 characters; the minimum length is 21, and the maximum is 7403. ## Citation If you find this repository helpful, feel free to cite our publication: ``` @INPROCEEDINGS{Smetanin2020Toxic, author={Sergey Smetanin}, booktitle={Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialogue 2020”}, title={Toxic Comments Detection in Russian}, year={2020}, doi={10.28995/2075-7182-2020-19-1149-1159} } ```
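A minimal usage sketch (not part of the original card). The stock `text-classification` pipeline is assumed to serve the model; how the returned labels map to toxic/non-toxic is not documented here:

```python
from transformers import pipeline

toxicity = pipeline("text-classification", model="sismetanin/rubert-toxic-pikabu-2ch")
# "Thanks for the helpful comment!" — expected to be classified as non-toxic.
print(toxicity("Спасибо за полезный комментарий!"))
```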
junnyu/bert_chinese_mc_base
junnyu
2021-05-20T05:28:56Z
8
3
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
https://github.com/alibaba-research/ChineseBLUE
sarahlintang/IndoBERT
sarahlintang
2021-05-20T04:51:45Z
28
2
transformers
[ "transformers", "pytorch", "jax", "bert", "id", "dataset:oscar", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: id
datasets:
- oscar
---

# IndoBERT (Indonesian BERT Model)

## Model description

IndoBERT is a pre-trained language model based on the BERT architecture for the Indonesian language. This is the base-uncased version, which uses the bert-base configuration.

## Intended uses & limitations

#### How to use

```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarahlintang/IndoBERT")
model = AutoModel.from_pretrained("sarahlintang/IndoBERT")
tokenizer.encode("hai aku mau makan.")
# [2, 8078, 1785, 2318, 1946, 18, 4]
```

## Training data

This model was pre-trained on 16 GB of raw text (~2 B words) from the OSCAR corpus (https://oscar-corpus.com/). The model uses the bert-base configuration with a 32,000-token vocabulary.

## Training procedure

The training of the model has been performed using Google's original TensorFlow code on an eight-core Google Cloud TPU v2. We used a Google Cloud Storage bucket for persistent storage of training data and models.

## Eval results

We evaluated this model on three Indonesian NLP downstream tasks:

- extractive summarization
- sentiment analysis
- part-of-speech tagging

On all three tasks, this model outperformed multilingual BERT.
rohanrajpal/bert-base-codemixed-uncased-sentiment
rohanrajpal
2021-05-20T04:32:54Z
18
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "text-classification", "hi", "en", "codemix", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language:
- hi
- en
tags:
- hi
- en
- codemix
datasets:
- SAIL 2017
---

# bert-base-codemixed-uncased-sentiment

## Model description

I took a bert-base-multilingual-cased model from huggingface and fine-tuned it on the SAIL 2017 dataset.

## Intended uses & limitations

#### How to use

```python
# Coming soon!
```

#### Limitations and bias

## Training data

I trained on the SAIL 2017 dataset [link](http://amitavadas.com/SAIL/Data/SAIL_2017.zip) on this [pretrained model](https://huggingface.co/bert-base-multilingual-cased).

## Training procedure

No preprocessing.

## Eval results

### BibTeX entry and citation info

```bibtex
@inproceedings{khanuja-etal-2020-gluecos,
    title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
    author = "Khanuja, Simran and Dandapat, Sandipan and Srinivasan, Anirudh and Sitaram, Sunayana and Choudhury, Monojit",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.329",
    pages = "3575--3585"
}
```
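A minimal usage sketch for the "Coming soon" How-to-use section above (not from the original card). The stock `text-classification` pipeline is assumed, and the mapping of SAIL 2017 sentiment classes (positive/neutral/negative) to the returned `LABEL_*` ids is not documented:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="rohanrajpal/bert-base-codemixed-uncased-sentiment",
)
# Hinglish (code-mixed Hindi-English): "this movie was very good"
print(classifier("yeh movie bahut achhi thi"))
```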
redewiedergabe/bert-base-historical-german-rw-cased
redewiedergabe
2021-05-20T04:11:23Z
27
3
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "de", "arxiv:1508.01991", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: de --- # Model description ## Dataset Trained on fictional and non-fictional German texts written between 1840 and 1920: * Narrative texts from Digitale Bibliothek (https://textgrid.de/digitale-bibliothek) * Fairy tales and sagas from Grimm Korpus (https://www1.ids-mannheim.de/kl/projekte/korpora/archiv/gri.html) * Newspaper and magazine article from Mannheimer Korpus Historischer Zeitungen und Zeitschriften (https://repos.ids-mannheim.de/mkhz-beschreibung.html) * Magazine article from the journal „Die Grenzboten“ (http://www.deutschestextarchiv.de/doku/textquellen#grenzboten) * Fictional and non-fictional texts from Projekt Gutenberg (https://www.projekt-gutenberg.org) ## Hardware used 1 Tesla P4 GPU ## Hyperparameters | Parameter | Value | |-------------------------------|----------| | Epochs | 3 | | Gradient_accumulation_steps | 1 | | Train_batch_size | 32 | | Learning_rate | 0.00003 | | Max_seq_len | 128 | ## Evaluation results: Automatic tagging of four forms of speech/thought/writing representation in historical fictional and non-fictional German texts The language model was used in the task to tag direct, indirect, reported and free indirect speech/thought/writing representation in fictional and non-fictional German texts. The tagger is available and described in detail at https://github.com/redewiedergabe/tagger. The tagging model was trained using the SequenceTagger Class of the Flair framework ([Akbik et al., 2019](https://www.aclweb.org/anthology/N19-4010)) which implements a BiLSTM-CRF architecture on top of a language embedding (as proposed by [Huang et al. (2015)](https://arxiv.org/abs/1508.01991)). Hyperparameters | Parameter | Value | |-------------------------------|------------| | Hidden_size | 256 | | Learning_rate | 0.1 | | Mini_batch_size | 8 | | Max_epochs | 150 | Results are reported below in comparison to a custom trained flair embedding, which was stacked onto a custom trained fastText-model. Both models were trained on the same dataset. | | BERT ||| FastText+Flair |||Test data| |----------------|----------|-----------|----------|------|-----------|--------|--------| | | F1 | Precision | Recall | F1 | Precision | Recall || | Direct | 0.80 | 0.86 | 0.74 | 0.84 | 0.90 | 0.79 |historical German, fictional & non-fictional| | Indirect | **0.76** | **0.79** | **0.73** | 0.73 | 0.78 | 0.68 |historical German, fictional & non-fictional| | Reported | **0.58** | **0.69** | **0.51** | 0.56 | 0.68 | 0.48 |historical German, fictional & non-fictional| | Free indirect | **0.57** | **0.80** | **0.44** | 0.47 | 0.78 | 0.34 |modern German, fictional| ## Intended use: Historical German Texts (1840 to 1920) (Showed good performance with modern German fictional texts as well)
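A minimal fill-mask sketch for the underlying language model (not part of the original card; the card itself describes the downstream Flair tagger, so direct masked-token prediction is an assumption):

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="redewiedergabe/bert-base-historical-german-rw-cased",
)
# "He said that he would [MASK] tomorrow." — a reported-speech style sentence.
print(fill_mask("Er sagte, dass er morgen [MASK] werde."))
```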
ahmedabdelali/bert-base-qarib60_860k
ahmedabdelali
2021-05-20T03:48:03Z
25
0
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "tf", "bert-base-qarib60_860k", "qarib", "ar", "dataset:arabic_billion_words", "dataset:open_subtitles", "dataset:twitter", "arxiv:2102.10684", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ar tags: - pytorch - tf - bert-base-qarib60_860k - qarib datasets: - arabic_billion_words - open_subtitles - twitter metrics: - f1 widget: - text: " شو عندكم يا [MASK] ." --- # QARiB: QCRI Arabic and Dialectal BERT ## About QARiB QCRI Arabic and Dialectal BERT (QARiB) model, was trained on a collection of ~ 420 Million tweets and ~ 180 Million sentences of text. For tweets, the data was collected using twitter API and using language filter. `lang:ar`. For text data, it was a combination from [Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/). ### bert-base-qarib60_860k - Data size: 60Gb - Number of Iterations: 860k - Loss: 2.2454472 ## Training QARiB The training of the model has been performed using Google’s original Tensorflow code on Google Cloud TPU v2. We used a Google Cloud Storage bucket, for persistent storage of training data and models. See more details in [Training QARiB](https://github.com/qcri/QARiB/blob/main/Training_QARiB.md) ## Using QARiB You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARiB/blob/main/Using_QARiB.md) ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>>from transformers import pipeline >>>fill_mask = pipeline("fill-mask", model="./models/data60gb_86k") >>> fill_mask("شو عندكم يا [MASK]") [{'sequence': '[CLS] شو عندكم يا عرب [SEP]', 'score': 0.0990147516131401, 'token': 2355, 'token_str': 'عرب'}, {'sequence': '[CLS] شو عندكم يا جماعة [SEP]', 'score': 0.051633741706609726, 'token': 2308, 'token_str': 'جماعة'}, {'sequence': '[CLS] شو عندكم يا شباب [SEP]', 'score': 0.046871256083250046, 'token': 939, 'token_str': 'شباب'}, {'sequence': '[CLS] شو عندكم يا رفاق [SEP]', 'score': 0.03598872944712639, 'token': 7664, 'token_str': 'رفاق'}, {'sequence': '[CLS] شو عندكم يا ناس [SEP]', 'score': 0.031996358186006546, 'token': 271, 'token_str': 'ناس'}] >>> fill_mask("قللي وشفيييك يرحم [MASK]") [{'sequence': '[CLS] قللي وشفيييك يرحم والديك [SEP]', 'score': 0.4152909517288208, 'token': 9650, 'token_str': 'والديك'}, {'sequence': '[CLS] قللي وشفيييك يرحملي [SEP]', 'score': 0.07663793861865997, 'token': 294, 'token_str': '##لي'}, {'sequence': '[CLS] قللي وشفيييك يرحم حالك [SEP]', 'score': 0.0453166700899601, 'token': 2663, 'token_str': 'حالك'}, {'sequence': '[CLS] قللي وشفيييك يرحم امك [SEP]', 'score': 0.04390475153923035, 'token': 1942, 'token_str': 'امك'}, {'sequence': '[CLS] قللي وشفيييك يرحمونك [SEP]', 'score': 0.027349254116415977, 'token': 3283, 'token_str': '##ونك'}] >>> fill_mask("وقام المدير [MASK]") [ {'sequence': '[CLS] وقام المدير بالعمل [SEP]', 'score': 0.0678194984793663, 'token': 4230, 'token_str': 'بالعمل'}, {'sequence': '[CLS] وقام المدير بذلك [SEP]', 'score': 0.05191086605191231, 'token': 984, 'token_str': 'بذلك'}, {'sequence': '[CLS] وقام المدير بالاتصال [SEP]', 'score': 0.045264165848493576, 'token': 26096, 'token_str': 'بالاتصال'}, {'sequence': '[CLS] وقام المدير بعمله [SEP]', 'score': 0.03732728958129883, 'token': 40486, 'token_str': 'بعمله'}, {'sequence': '[CLS] وقام المدير بالامر [SEP]', 'score': 0.0246378555893898, 'token': 29124, 'token_str': 'بالامر'} ] >>> fill_mask("وقامت المديرة [MASK]") [{'sequence': '[CLS] وقامت المديرة بذلك [SEP]', 'score': 0.23992691934108734, 'token': 984, 'token_str': 'بذلك'}, 
{'sequence': '[CLS] وقامت المديرة بالامر [SEP]', 'score': 0.108805812895298, 'token': 29124, 'token_str': 'بالامر'}, {'sequence': '[CLS] وقامت المديرة بالعمل [SEP]', 'score': 0.06639821827411652, 'token': 4230, 'token_str': 'بالعمل'}, {'sequence': '[CLS] وقامت المديرة بالاتصال [SEP]', 'score': 0.05613093823194504, 'token': 26096, 'token_str': 'بالاتصال'}, {'sequence': '[CLS] وقامت المديرة المديرة [SEP]', 'score': 0.021778125315904617, 'token': 41635, 'token_str': 'المديرة'}] ``` ## Training procedure The training of the model has been performed using Google’s original Tensorflow code on eight core Google Cloud TPU v2. We used a Google Cloud Storage bucket, for persistent storage of training data and models. ## Eval results We evaluated QARiB models on five NLP downstream task: - Sentiment Analysis - Emotion Detection - Named-Entity Recognition (NER) - Offensive Language Detection - Dialect Identification The results obtained from QARiB models outperforms multilingual BERT/AraBERT/ArabicBERT. ## Model Weights and Vocab Download From Huggingface site: https://huggingface.co/qarib/bert-base-qarib60_860k ## Contacts Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih ## Reference ``` @article{abdelali2021pretraining, title={Pre-Training BERT on Arabic Tweets: Practical Considerations}, author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih}, year={2021}, eprint={2102.10684}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
ahmedabdelali/bert-base-qarib60_1790k
ahmedabdelali
2021-05-20T03:44:18Z
49
2
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "tf", "qarib", "qarib60_1790k", "ar", "dataset:arabic_billion_words", "dataset:open_subtitles", "dataset:twitter", "arxiv:2102.10684", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ar tags: - pytorch - tf - qarib - qarib60_1790k datasets: - arabic_billion_words - open_subtitles - twitter metrics: - f1 widget: - text: " شو عندكم يا [MASK] ." --- # QARiB: QCRI Arabic and Dialectal BERT ## About QARiB QCRI Arabic and Dialectal BERT (QARiB) model, was trained on a collection of ~ 420 Million tweets and ~ 180 Million sentences of text. For Tweets, the data was collected using twitter API and using language filter. `lang:ar`. For Text data, it was a combination from [Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/). ### bert-base-qarib60_1790k - Data size: 60Gb - Number of Iterations: 1790k - Loss: 1.8764963 ## Training QARiB The training of the model has been performed using Google’s original Tensorflow code on Google Cloud TPU v2. We used a Google Cloud Storage bucket, for persistent storage of training data and models. See more details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md) ## Using QARiB You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md) ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>>from transformers import pipeline >>>fill_mask = pipeline("fill-mask", model="./models/data60gb_86k") >>> fill_mask("شو عندكم يا [MASK]") [{'sequence': '[CLS] شو عندكم يا عرب [SEP]', 'score': 0.0990147516131401, 'token': 2355, 'token_str': 'عرب'}, {'sequence': '[CLS] شو عندكم يا جماعة [SEP]', 'score': 0.051633741706609726, 'token': 2308, 'token_str': 'جماعة'}, {'sequence': '[CLS] شو عندكم يا شباب [SEP]', 'score': 0.046871256083250046, 'token': 939, 'token_str': 'شباب'}, {'sequence': '[CLS] شو عندكم يا رفاق [SEP]', 'score': 0.03598872944712639, 'token': 7664, 'token_str': 'رفاق'}, {'sequence': '[CLS] شو عندكم يا ناس [SEP]', 'score': 0.031996358186006546, 'token': 271, 'token_str': 'ناس'}] >>> fill_mask("قللي وشفيييك يرحم [MASK]") [{'sequence': '[CLS] قللي وشفيييك يرحم والديك [SEP]', 'score': 0.4152909517288208, 'token': 9650, 'token_str': 'والديك'}, {'sequence': '[CLS] قللي وشفيييك يرحملي [SEP]', 'score': 0.07663793861865997, 'token': 294, 'token_str': '##لي'}, {'sequence': '[CLS] قللي وشفيييك يرحم حالك [SEP]', 'score': 0.0453166700899601, 'token': 2663, 'token_str': 'حالك'}, {'sequence': '[CLS] قللي وشفيييك يرحم امك [SEP]', 'score': 0.04390475153923035, 'token': 1942, 'token_str': 'امك'}, {'sequence': '[CLS] قللي وشفيييك يرحمونك [SEP]', 'score': 0.027349254116415977, 'token': 3283, 'token_str': '##ونك'}] >>> fill_mask("وقام المدير [MASK]") [ {'sequence': '[CLS] وقام المدير بالعمل [SEP]', 'score': 0.0678194984793663, 'token': 4230, 'token_str': 'بالعمل'}, {'sequence': '[CLS] وقام المدير بذلك [SEP]', 'score': 0.05191086605191231, 'token': 984, 'token_str': 'بذلك'}, {'sequence': '[CLS] وقام المدير بالاتصال [SEP]', 'score': 0.045264165848493576, 'token': 26096, 'token_str': 'بالاتصال'}, {'sequence': '[CLS] وقام المدير بعمله [SEP]', 'score': 0.03732728958129883, 'token': 40486, 'token_str': 'بعمله'}, {'sequence': '[CLS] وقام المدير بالامر [SEP]', 'score': 0.0246378555893898, 'token': 29124, 'token_str': 'بالامر'} ] >>> fill_mask("وقامت المديرة [MASK]") [{'sequence': '[CLS] وقامت المديرة بذلك [SEP]', 'score': 0.23992691934108734, 'token': 984, 'token_str': 'بذلك'}, {'sequence': '[CLS] وقامت المديرة 
بالامر [SEP]', 'score': 0.108805812895298, 'token': 29124, 'token_str': 'بالامر'}, {'sequence': '[CLS] وقامت المديرة بالعمل [SEP]', 'score': 0.06639821827411652, 'token': 4230, 'token_str': 'بالعمل'}, {'sequence': '[CLS] وقامت المديرة بالاتصال [SEP]', 'score': 0.05613093823194504, 'token': 26096, 'token_str': 'بالاتصال'}, {'sequence': '[CLS] وقامت المديرة المديرة [SEP]', 'score': 0.021778125315904617, 'token': 41635, 'token_str': 'المديرة'}] ``` ## Training procedure The training of the model has been performed using Google’s original Tensorflow code on eight core Google Cloud TPU v2. We used a Google Cloud Storage bucket, for persistent storage of training data and models. ## Eval results We evaluated QARiB models on five NLP downstream task: - Sentiment Analysis - Emotion Detection - Named-Entity Recognition (NER) - Offensive Language Detection - Dialect Identification The results obtained from QARiB models outperforms multilingual BERT/AraBERT/ArabicBERT. ## Model Weights and Vocab Download From Huggingface site: https://huggingface.co/qarib/qarib/bert-base-qarib60_1790k ## Contacts Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih ## Reference ``` @article{abdelali2021pretraining, title={Pre-Training BERT on Arabic Tweets: Practical Considerations}, author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih}, year={2021}, eprint={2102.10684}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
ahmedabdelali/bert-base-qarib
ahmedabdelali
2021-05-20T03:42:19Z
1,216
9
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "tf", "QARiB", "qarib", "ar", "dataset:arabic_billion_words", "dataset:open_subtitles", "dataset:twitter", "arxiv:2102.10684", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: ar tags: - pytorch - tf - QARiB - qarib datasets: - arabic_billion_words - open_subtitles - twitter metrics: - f1 widget: - text: " شو عندكم يا [MASK] ." --- # QARiB: QCRI Arabic and Dialectal BERT ## About QARiB QCRI Arabic and Dialectal BERT (QARiB) model, was trained on a collection of ~ 420 Million tweets and ~ 180 Million sentences of text. For the tweets, the data was collected using twitter API and using language filter. `lang:ar`. For the text data, it was a combination from [Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/). QARiB: Is the Arabic name for "Boat". ## Model and Parameters: - Data size: 14B tokens - Vocabulary: 64k - Iterations: 10M - Number of Layers: 12 ## Training QARiB See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md) ## Using QARiB You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md) ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>>from transformers import pipeline >>>fill_mask = pipeline("fill-mask", model="./models/data60gb_86k") >>> fill_mask("شو عندكم يا [MASK]") [{'sequence': '[CLS] شو عندكم يا عرب [SEP]', 'score': 0.0990147516131401, 'token': 2355, 'token_str': 'عرب'}, {'sequence': '[CLS] شو عندكم يا جماعة [SEP]', 'score': 0.051633741706609726, 'token': 2308, 'token_str': 'جماعة'}, {'sequence': '[CLS] شو عندكم يا شباب [SEP]', 'score': 0.046871256083250046, 'token': 939, 'token_str': 'شباب'}, {'sequence': '[CLS] شو عندكم يا رفاق [SEP]', 'score': 0.03598872944712639, 'token': 7664, 'token_str': 'رفاق'}, {'sequence': '[CLS] شو عندكم يا ناس [SEP]', 'score': 0.031996358186006546, 'token': 271, 'token_str': 'ناس'} ] >>> fill_mask("وقام المدير [MASK]") [ {'sequence': '[CLS] وقام المدير بالعمل [SEP]', 'score': 0.0678194984793663, 'token': 4230, 'token_str': 'بالعمل'}, {'sequence': '[CLS] وقام المدير بذلك [SEP]', 'score': 0.05191086605191231, 'token': 984, 'token_str': 'بذلك'}, {'sequence': '[CLS] وقام المدير بالاتصال [SEP]', 'score': 0.045264165848493576, 'token': 26096, 'token_str': 'بالاتصال'}, {'sequence': '[CLS] وقام المدير بعمله [SEP]', 'score': 0.03732728958129883, 'token': 40486, 'token_str': 'بعمله'}, {'sequence': '[CLS] وقام المدير بالامر [SEP]', 'score': 0.0246378555893898, 'token': 29124, 'token_str': 'بالامر'} ] >>> fill_mask("وقامت المديرة [MASK]") [{'sequence': '[CLS] وقامت المديرة بذلك [SEP]', 'score': 0.23992691934108734, 'token': 984, 'token_str': 'بذلك'}, {'sequence': '[CLS] وقامت المديرة بالامر [SEP]', 'score': 0.108805812895298, 'token': 29124, 'token_str': 'بالامر'}, {'sequence': '[CLS] وقامت المديرة بالعمل [SEP]', 'score': 0.06639821827411652, 'token': 4230, 'token_str': 'بالعمل'}, {'sequence': '[CLS] وقامت المديرة بالاتصال [SEP]', 'score': 0.05613093823194504, 'token': 26096, 'token_str': 'بالاتصال'}, {'sequence': '[CLS] وقامت المديرة المديرة [SEP]', 'score': 0.021778125315904617, 'token': 41635, 'token_str': 'المديرة'}] >>> fill_mask("قللي وشفيييك يرحم [MASK]") [{'sequence': '[CLS] قللي وشفيييك يرحم والديك [SEP]', 'score': 0.4152909517288208, 'token': 9650, 'token_str': 'والديك'}, {'sequence': '[CLS] قللي وشفيييك يرحملي [SEP]', 'score': 0.07663793861865997, 'token': 294, 'token_str': '##لي'}, {'sequence': '[CLS] قللي وشفيييك يرحم حالك [SEP]', 
'score': 0.0453166700899601, 'token': 2663, 'token_str': 'حالك'}, {'sequence': '[CLS] قللي وشفيييك يرحم امك [SEP]', 'score': 0.04390475153923035, 'token': 1942, 'token_str': 'امك'}, {'sequence': '[CLS] قللي وشفيييك يرحمونك [SEP]', 'score': 0.027349254116415977, 'token': 3283, 'token_str': '##ونك'}] ``` ## Evaluations: |**Experiment** |**mBERT**|**AraBERT0.1**|**AraBERT1.0**|**ArabicBERT**|**QARiB**| |---------------|---------|--------------|--------------|--------------|---------| |Dialect Identification | 6.06% | 59.92% | 59.85% | 61.70% | **65.21%** | |Emotion Detection | 27.90% | 43.89% | 42.37% | 41.65% | **44.35%** | |Named-Entity Recognition (NER) | 49.38% | 64.97% | **66.63%** | 64.04% | 61.62% | |Offensive Language Detection | 83.14% | 88.07% | 88.97% | 88.19% | **91.94%** | |Sentiment Analysis | 86.61% | 90.80% | **93.58%** | 83.27% | 93.31% | ## Model Weights and Vocab Download From Huggingface site: https://huggingface.co/qarib/bert-base-qarib ## Contacts Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih ## Reference ``` @article{abdelali2021pretraining, title={Pre-Training BERT on Arabic Tweets: Practical Considerations}, author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih}, year={2021}, eprint={2102.10684}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
pucpr/bioBERTpt-squad-v1.1-portuguese
pucpr
2021-05-20T03:08:26Z
29
8
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "question-answering", "bioBERTpt", "pt", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: pt tags: - question-answering - bert - bioBERTpt - pytorch metrics: - squad widget: - text: "O que é AVC?" context: "O AVC (Acidente vascular cerebral) é a segunda principal causa de morte no Brasil e a principal causa de incapacidade em adultos, retirando do mercado de trabalho milhares de brasileiros. A cada 5 minutos ocorre uma morte por AVC em nosso país. Ele é uma alteração súbita na circulação de sangue em alguma região encéfalo (composto pelo cérebro, cerebelo e tronco encefálico)." - text: "O que significa a sigla AVC?" context: "O AVC (Acidente vascular cerebral) é a segunda principal causa de morte no Brasil e a principal causa de incapacidade em adultos, retirando do mercado de trabalho milhares de brasileiros. A cada 5 minutos ocorre uma morte por AVC em nosso país. Ele é uma alteração súbita na circulação de sangue em alguma região encéfalo (composto pelo cérebro, cerebelo e tronco encefálico)." - text: "Do que a região do encéfalo é composta?" context: "O AVC (Acidente vascular cerebral) é a segunda principal causa de morte no Brasil e a principal causa de incapacidade em adultos, retirando do mercado de trabalho milhares de brasileiros. A cada 5 minutos ocorre uma morte por AVC em nosso país. Ele é uma alteração súbita na circulação de sangue em alguma região encéfalo (composto pelo cérebro, cerebelo e tronco encefálico)." - text: "O que causa a interrupção do oxigênio?" context: "O oxigênio é um elemento essencial para a atividade normal do nosso corpo; ele juntamente com os nutrientes são transportados pelo sangue, através das nossas artérias, estas funcionam como mangueiras direcionando o sangue para regiões específicas. Quando esse transporte é impedido e o oxigênio não chega as áreas necessárias parte do encéfalo não consegue obter o sangue (e oxigênio) de que precisa, então ele e as células sofrem lesão ou morrem. Essa interrupção pode ser causada por duas razões, um entupimento ou um vazamento nas artérias. desta forma temos dois tipos de AVC." --- # BioBERTpt-squad-v1.1-portuguese for QA (Question Answering) This is a clinical and biomedical model trained with generic QA questions. This model was finetuned on SQUAD v1.1, with the dataset SQUAD v1.1 in portuguese, from the Deep Learning Brasil group on Google Colab. See more details [here](https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese). ## Performance The results obtained are the following: ``` f1 = 80.06 exact match = 67.52 ``` ## See more Our repo: https://github.com/HAILab-PUCPR/
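A minimal question-answering sketch (not part of the original card; the stock `question-answering` pipeline is assumed, and the example reuses the widget text above):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="pucpr/bioBERTpt-squad-v1.1-portuguese")

result = qa(
    question="O que significa a sigla AVC?",
    context=(
        "O AVC (Acidente vascular cerebral) é a segunda principal causa de morte "
        "no Brasil e a principal causa de incapacidade em adultos."
    ),
)
print(result["answer"])
```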
pin/analytical
pin
2021-05-20T02:44:25Z
4
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "danish", "sentiment", "analytical", "da", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: da tags: - danish - bert - sentiment - analytical license: cc-by-4.0 widget: - text: "Jeg synes, det er en elendig film" --- # Danish BERT fine-tuned for Detecting 'Analytical' This model detects if a Danish text is 'subjective' or 'objective'. It is trained and tested on Tweets and texts transcribed from the European Parliament annotated by [Alexandra Institute](https://github.com/alexandrainst). The model is trained with the [`senda`](https://github.com/ebanalyse/senda) package. Here is an example of how to load the model in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("pin/analytical") model = AutoModelForSequenceClassification.from_pretrained("pin/analytical") # create 'senda' sentiment analysis pipeline analytical_pipeline = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) text = "Jeg synes, det er en elendig film" # in English: 'I think, it is a terrible movie' analytical_pipeline(text) ``` ## Performance The `senda` model achieves an accuracy of 0.89 and a macro-averaged F1-score of 0.78 on a small test data set, that [Alexandra Institute](https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md#twitter-sentiment) provides. The model can most certainly be improved, and we encourage all NLP-enthusiasts to give it their best shot - you can use the [`senda`](https://github.com/ebanalyse/senda) package to do this. #### Contact Feel free to contact author Lars Kjeldgaard on [[email protected]](mailto:[email protected]).
phiyodr/bert-large-finetuned-squad2
phiyodr
2021-05-20T02:36:12Z
20,952
0
transformers
[ "transformers", "pytorch", "jax", "bert", "question-answering", "en", "dataset:squad2", "arxiv:1810.04805", "arxiv:1806.03822", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en tags: - pytorch - question-answering datasets: - squad2 metrics: - exact - f1 widget: - text: "What discipline did Winkelmann create?" context: "Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. The prophet and founding hero of modern archaeology, Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art." --- # bert-large-finetuned-squad2 ## Model description This model is based on **[bert-large-uncased](https://huggingface.co/bert-large-uncased)** and was finetuned on **[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/)**. The corresponding papers you can found [here (model)](https://arxiv.org/abs/1810.04805) and [here (data)](https://arxiv.org/abs/1806.03822). ## How to use ```python from transformers.pipelines import pipeline model_name = "phiyodr/bert-large-finetuned-squad2" nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) inputs = { 'question': 'What discipline did Winkelmann create?', 'context': 'Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. "The prophet and founding hero of modern archaeology", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art. ' } nlp(inputs) ``` ## Training procedure ``` { "base_model": "bert-large-uncased", "do_lower_case": True, "learning_rate": 3e-5, "num_train_epochs": 4, "max_seq_length": 384, "doc_stride": 128, "max_query_length": 64, "batch_size": 96 } ``` ## Eval results - Data: [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json) - Script: [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/) (original script from [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md)) ``` { "exact": 76.22336393497852, "f1": 79.72527570261339, "total": 11873, "HasAns_exact": 76.19770580296895, "HasAns_f1": 83.21157193271408, "HasAns_total": 5928, "NoAns_exact": 76.24894869638352, "NoAns_f1": 76.24894869638352, "NoAns_total": 5945 } ```
phiyodr/bert-base-finetuned-squad2
phiyodr
2021-05-20T02:34:19Z
94
2
transformers
[ "transformers", "pytorch", "jax", "bert", "question-answering", "en", "dataset:squad2", "arxiv:1810.04805", "arxiv:1806.03822", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
metrics:
- exact
- f1
widget:
- text: "What discipline did Winkelmann create?"
  context: "Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. The prophet and founding hero of modern archaeology, Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art."
---

# bert-base-finetuned-squad2

## Model description

This model is based on **[bert-base-uncased](https://huggingface.co/bert-base-uncased)** and was finetuned on **[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/)**. The corresponding papers can be found [here (model)](https://arxiv.org/abs/1810.04805) and [here (data)](https://arxiv.org/abs/1806.03822).

## How to use

```python
from transformers.pipelines import pipeline

model_name = "phiyodr/bert-base-finetuned-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
    'question': 'What discipline did Winkelmann create?',
    'context': 'Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. "The prophet and founding hero of modern archaeology", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art. '
}
nlp(inputs)
```

## Training procedure

```
{
  "base_model": "bert-base-uncased",
  "do_lower_case": True,
  "learning_rate": 3e-5,
  "num_train_epochs": 4,
  "max_seq_length": 384,
  "doc_stride": 128,
  "max_query_length": 64,
  "batch_size": 96
}
```

## Eval results

- Data: [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json)
- Script: [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/) (original script from [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md))

```
{
  "exact": 70.3950138970774,
  "f1": 73.90527661873521,
  "total": 11873,
  "HasAns_exact": 71.4574898785425,
  "HasAns_f1": 78.48808186475087,
  "HasAns_total": 5928,
  "NoAns_exact": 69.33557611438184,
  "NoAns_f1": 69.33557611438184,
  "NoAns_total": 5945
}
```
olastor/mcn-en-smm4h
olastor
2021-05-20T02:11:39Z
12
1
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# BERT MCN-Model using SMM4H 2017 (subtask 3) data

The model was trained using [clagator/biobert_v1.1_pubmed_nli_sts](https://huggingface.co/clagator/biobert_v1.1_pubmed_nli_sts) as a base on the SMM4H 2017 dataset (subtask 3).

## Dataset

See [here](https://github.com/olastor/medical-concept-normalization/tree/main/data/smm4h) for the scripts and datasets.

**Attribution**

Sarker, Abeed (2018), “Data and systems for medication-related text classification and concept normalization from Twitter: Insights from the Social Media Mining for Health (SMM4H)-2017 shared task”, Mendeley Data, V2, doi: 10.17632/rxwfb3tysd.2

### Test Results

- Acc: 89.44
- Acc@2: 91.84
- Acc@3: 93.20
- Acc@5: 94.32
- Acc@10: 95.04

Acc@N denotes the accuracy taking the top N predictions of the model into account, not just the first one.
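### Usage (sketch)

The card above does not include a usage snippet. Below is a minimal sketch of how the checkpoint could be queried as a standard 🤗Transformers text-classification model; the repository id is taken from this record, and it is assumed that the classifier's output labels correspond to the normalized concept identifiers learned during fine-tuning (check the model config's `id2label` for the actual mapping).

```python
from transformers import pipeline

# Sketch: treat medical concept normalization as single-label text classification.
# The returned label is assumed to be the concept id/class from fine-tuning.
normalizer = pipeline("text-classification", model="olastor/mcn-en-smm4h")

# An illustrative ADR-style mention, in the spirit of the SMM4H Twitter data.
print(normalizer("my head is killing me"))
```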
noahjadallah/cause-effect-detection
noahjadallah
2021-05-20T02:01:13Z
53
6
transformers
[ "transformers", "pytorch", "jax", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
widget:
- text: "If a user signs up, he will receive a confirmation email."
---

# Cause-Effect Detection for Software Requirements Based on Token Classification with BERT

This model uses BERT to detect cause and effect in a single sentence. The focus of this model is the domain of software requirements engineering; however, it can also be applied to other domains.

The model outputs one of the following 5 labels for each token:

- Other
- B-Cause
- I-Cause
- B-Effect
- I-Effect

The source code can be found here: https://colab.research.google.com/drive/14V9Ooy3aNPsRfTK88krwsereia8cfSPc?usp=sharing
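As a quick check, the model can presumably be run through a standard token-classification pipeline. The sketch below uses the widget sentence from the card; the tag names are the labels listed above.

```python
from transformers import pipeline

# Minimal sketch: tag each token with one of the labels listed in the card
# (Other, B-Cause, I-Cause, B-Effect, I-Effect).
tagger = pipeline("token-classification", model="noahjadallah/cause-effect-detection")

for tag in tagger("If a user signs up, he will receive a confirmation email."):
    print(tag["word"], tag["entity"])
```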
nimaafshar/parsbert-fa-sentiment-twitter
nimaafshar
2021-05-20T01:50:49Z
14
1
transformers
[ "transformers", "tf", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
A ParsBERT (Digikala) sentiment analysis model fine-tuned on around 600,000 Persian tweets.

# How to use

You need at least 650 MB of RAM and disk space to load the model, as well as the `tensorflow`, `transformers` and `numpy` libraries.

## Loading model

```python
import numpy as np
import tensorflow as tf  # needed for tf.nn.softmax below
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("nimaafshar/parsbert-fa-sentiment-twitter")
model = TFAutoModelForSequenceClassification.from_pretrained("nimaafshar/parsbert-fa-sentiment-twitter")
classes = ["negative", "neutral", "positive"]
```

## Using Model

```python
# classify each sequence and print the predicted class and per-class percentages
sequences = [".غذا خیلی افتضاح بود متاسفم برای مدیریت رستورن خیلی بد بود.",
             "خیلی خوشمزده و عالی بود عالی",
             "می‌تونم اسمتونو بپرسم؟"
            ]

for sequence in sequences:
    inputs = tokenizer(sequence, return_tensors="tf")
    classification_logits = model(inputs)[0]
    results = tf.nn.softmax(classification_logits, axis=1).numpy()[0]
    print(classes[np.argmax(results)])
    percentages = np.around(results * 100)
    print(percentages)
```

Note that this model was trained on a Persian corpus and is meant to be used on Persian texts.
nikunjbjj/jd-resume-model
nikunjbjj
2021-05-20T01:50:03Z
7
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Sentiment Analysis in Spanish
## beto-sentiment-analysis

Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)

Model trained on the TASS 2020 corpus (around 5k tweets) covering several dialects of Spanish. The base model is [BETO](https://github.com/dccuchile/beto), a BERT model trained in Spanish.

Uses `POS`, `NEG`, `NEU` labels.

**Coming soon**: a brief paper describing the model and training.

Enjoy! 🤗
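No usage snippet is included above. Here is a minimal sketch, assuming the checkpoint stored under this record's repository id matches the card (a BETO-based classifier emitting `POS`, `NEG`, `NEU` labels):

```python
from transformers import pipeline

# Sketch: load the checkpoint from this record's repository id and classify a Spanish sentence.
# Expected label set per the card: POS, NEG, NEU.
sentiment = pipeline("sentiment-analysis", model="nikunjbjj/jd-resume-model")

print(sentiment("Qué gran película, me encantó"))
```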
neuralmind/bert-large-portuguese-cased
neuralmind
2021-05-20T01:31:09Z
222,365
66
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "pt", "dataset:brWaC", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: pt license: mit tags: - bert - pytorch datasets: - brWaC --- # BERTimbau Large (aka "bert-large-portuguese-cased") ![Bert holding a berimbau](https://imgur.com/JZ7Hynh.jpg) ## Introduction BERTimbau Large is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large. For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/). ## Available models | Model | Arch. | #Layers | #Params | | ---------------------------------------- | ---------- | ------- | ------- | | `neuralmind/bert-base-portuguese-cased` | BERT-Base | 12 | 110M | | `neuralmind/bert-large-portuguese-cased` | BERT-Large | 24 | 335M | ## Usage ```python from transformers import AutoTokenizer # Or BertTokenizer from transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads from transformers import AutoModel # or BertModel, for BERT without pretraining heads model = AutoModelForPreTraining.from_pretrained('neuralmind/bert-large-portuguese-cased') tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-large-portuguese-cased', do_lower_case=False) ``` ### Masked language modeling prediction example ```python from transformers import pipeline pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer) pipe('Tinha uma [MASK] no meio do caminho.') # [{'score': 0.5054386258125305, # 'sequence': '[CLS] Tinha uma pedra no meio do caminho. [SEP]', # 'token': 5028, # 'token_str': 'pedra'}, # {'score': 0.05616172030568123, # 'sequence': '[CLS] Tinha uma curva no meio do caminho. [SEP]', # 'token': 9562, # 'token_str': 'curva'}, # {'score': 0.02348282001912594, # 'sequence': '[CLS] Tinha uma parada no meio do caminho. [SEP]', # 'token': 6655, # 'token_str': 'parada'}, # {'score': 0.01795753836631775, # 'sequence': '[CLS] Tinha uma mulher no meio do caminho. [SEP]', # 'token': 2606, # 'token_str': 'mulher'}, # {'score': 0.015246033668518066, # 'sequence': '[CLS] Tinha uma luz no meio do caminho. [SEP]', # 'token': 3377, # 'token_str': 'luz'}] ``` ### For BERT embeddings ```python import torch model = AutoModel.from_pretrained('neuralmind/bert-large-portuguese-cased') input_ids = tokenizer.encode('Tinha uma pedra no meio do caminho.', return_tensors='pt') with torch.no_grad(): outs = model(input_ids) encoded = outs[0][0, 1:-1] # Ignore [CLS] and [SEP] special tokens # encoded.shape: (8, 1024) # tensor([[ 1.1872, 0.5606, -0.2264, ..., 0.0117, -0.1618, -0.2286], # [ 1.3562, 0.1026, 0.1732, ..., -0.3855, -0.0832, -0.1052], # [ 0.2988, 0.2528, 0.4431, ..., 0.2684, -0.5584, 0.6524], # ..., # [ 0.3405, -0.0140, -0.0748, ..., 0.6649, -0.8983, 0.5802], # [ 0.1011, 0.8782, 0.1545, ..., -0.1768, -0.8880, -0.1095], # [ 0.7912, 0.9637, -0.3859, ..., 0.2050, -0.1350, 0.0432]]) ``` ## Citation If you use our work, please cite: ```bibtex @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } ```
napsternxg/scibert_scivocab_uncased_ft_tv_SDU21_AI
napsternxg
2021-05-20T01:11:49Z
4
0
transformers
[ "transformers", "pytorch", "jax", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
scibert_scivocab_uncased_ft_tv MLM pretrained on SDU21 Task 1 + 2
napsternxg/scibert_scivocab_uncased_ft_mlm_SDU21_AI
napsternxg
2021-05-20T01:10:55Z
3
0
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
scibert_scivocab_uncased_ft_mlm MLM pretrained on SDU21 Task 1 + 2
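A minimal usage sketch for this MLM-adapted SciBERT checkpoint, querying its fill-mask head (the example sentence is illustrative, not taken from the SDU21 data):

```python
from transformers import pipeline

# Sketch: query the masked-language-modeling head of this SciBERT checkpoint.
fill_mask = pipeline("fill-mask", model="napsternxg/scibert_scivocab_uncased_ft_mlm_SDU21_AI")

print(fill_mask("Convolutional neural networks are widely used in [MASK] vision."))
```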
napsternxg/scibert_scivocab_uncased_SDU21_AI
napsternxg
2021-05-20T01:09:06Z
6
0
transformers
[ "transformers", "pytorch", "jax", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
scibert_scivocab_uncased submission for SDU21 Task 1 AI
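A minimal usage sketch for this SDU21 Task 1 (Acronym Identification) checkpoint; the exact tag names should be checked in the model config, and the sentence below is illustrative:

```python
from transformers import pipeline

# Sketch: run the acronym-identification checkpoint as a token classifier.
tagger = pipeline("token-classification", model="napsternxg/scibert_scivocab_uncased_SDU21_AI")

for tag in tagger("Convolutional neural networks (CNN) are widely used in computer vision."):
    print(tag["word"], tag["entity"])
```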
murali1996/bert-base-cased-spell-correction
murali1996
2021-05-20T01:04:57Z
36
7
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
`bert-base-cased` trained for spelling correction. See [neuspell](https://github.com/neuspell/neuspell) repository for more details about training and evaluating the model.
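The neuspell toolkit linked above wraps this checkpoint for end-to-end spell correction. As a sketch, the raw encoder weights can also be loaded directly with 🤗Transformers; this only exposes contextual representations, not the correction logic, and assumes the repository ships tokenizer files:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Sketch: load the checkpoint as a plain BERT encoder (feature extraction only).
tokenizer = AutoTokenizer.from_pretrained("murali1996/bert-base-cased-spell-correction")
model = AutoModel.from_pretrained("murali1996/bert-base-cased-spell-correction")

inputs = tokenizer("I lik this mvie", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, num_tokens, hidden_size)
```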
mudes/en-base
mudes
2021-05-20T01:03:44Z
5
1
transformers
[ "transformers", "pytorch", "jax", "bert", "token-classification", "mudes", "en", "arxiv:2102.09665", "arxiv:2104.04630", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: en tags: - mudes license: apache-2.0 --- # MUDES - {Mu}ltilingual {De}tection of Offensive {S}pans We provide state-of-the-art models to detect toxic spans in social media texts. We introduce our framework in [this paper](https://arxiv.org/abs/2102.09665). We have evaluated our models on Toxic Spans task at SemEval 2021 (Task 5). Our participation in the task is detailed in [this paper](https://arxiv.org/abs/2104.04630). ## Usage You can use this model when you have [MUDES](https://github.com/TharinduDR/MUDES) installed: ```bash pip install mudes ``` Then you can use the model like this: ```python from mudes.app.mudes_app import MUDESApp app = MUDESApp("en-base", use_cuda=False) print(app.predict_toxic_spans("You motherfucking cunt", spans=True)) ``` ## System Demonstration An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out in [here](http://rgcl.wlv.ac.uk/mudes/). ## Citing & Authors If you find this model helpful, feel free to cite our publications ```bibtex @inproceedings{ranasinghemudes, title={{MUDES: Multilingual Detection of Offensive Spans}}, author={Tharindu Ranasinghe and Marcos Zampieri}, booktitle={Proceedings of NAACL}, year={2021} } ``` ```bibtex @inproceedings{ranasinghe2021semeval, title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}}, author = {Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex}, booktitle={Proceedings of SemEval}, year={2021} } ```
mrm8488/spanbert-large-finetuned-squadv2
mrm8488
2021-05-20T00:59:58Z
66
1
transformers
[ "transformers", "pytorch", "jax", "bert", "en", "arxiv:1907.10529", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: --- # SpanBERT large fine-tuned on SQuAD v2 [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task ([by them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)). ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model fine-tuning 🏋️‍ You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT) ```bash python code/run_squad.py \ --do_train \ --do_eval \ --model spanbert-large-cased \ --train_file train-v2.0.json \ --dev_file dev-v2.0.json \ --train_batch_size 32 \ --eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 4 \ --max_seq_length 512 \ --doc_stride 128 \ --eval_metric best_f1 \ --output_dir squad2_output \ --version_2_with_negative \ --fp16 ``` ## Results Comparison 📝 | | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED | | ---------------------- | ------------- | --------- | ------- | ------ | | | F1 | F1 | avg. F1 | F1 | | BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 | | SpanBERT (base) | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) | | BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 | | SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | **88.7** (this) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) | Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers. ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/spanbert-large-finetuned-squadv2", tokenizer="SpanBERT/spanbert-large-cased" ) qa_pipeline({ 'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately", 'question': "How has been working Manuel Romero lately?" }) # Output: {'answer': 'very hard', 'end': 40, 'score': 0.9052708846768347, 'start': 31} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/spanbert-large-finetuned-squadv1
mrm8488
2021-05-20T00:58:31Z
10
0
transformers
[ "transformers", "pytorch", "jax", "bert", "en", "arxiv:1907.10529", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: --- # SpanBERT large fine-tuned on SQuAD v1 [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task ([by them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)). ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) ## Model fine-tuning 🏋️‍ You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT) ```bash python code/run_squad.py \ --do_train \ --do_eval \ --model spanbert-large-cased \ --train_file train-v1.1.json \ --dev_file dev-v1.1.json \ --train_batch_size 32 \ --eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 4 \ --max_seq_length 512 \ --doc_stride 128 \ --eval_metric f1 \ --output_dir squad_output \ --fp16 ``` ## Results Comparison 📝 | | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED | | ---------------------- | ------------- | --------- | ------- | ------ | | | F1 | F1 | avg. F1 | F1 | | BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 | | SpanBERT (base) | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) | | BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 | | SpanBERT (large) | **94.6** (this) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) | Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers. ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/spanbert-large-finetuned-squadv1", tokenizer="SpanBERT/spanbert-large-cased" ) qa_pipeline({ 'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately", 'question': "How has been working Manuel Romero lately?" }) # Output: {'answer': 'very hard in the repository hugginface/transformers', 'end': 82, 'score': 0.327230326857725, 'start': 31} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/spanbert-base-finetuned-tacred
mrm8488
2021-05-20T00:53:07Z
55
0
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "en", "arxiv:1907.10529", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language: en
thumbnail:
---

# SpanBERT base fine-tuned on TACRED

[SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on the [TACRED](https://nlp.stanford.edu/projects/tacred/) dataset by [them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution).

## Details of SpanBERT

[SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529)

## Dataset 📚

[TACRED](https://nlp.stanford.edu/projects/tacred/): a large-scale relation extraction dataset with 106k+ examples over 42 TAC KBP relation types.

## Model fine-tuning 🏋️‍

You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT)

```bash
python code/run_tacred.py \
  --do_train \
  --do_eval \
  --data_dir <TACRED_DATA_DIR> \
  --model spanbert-base-cased \
  --train_batch_size 32 \
  --eval_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 10 \
  --max_seq_length 128 \
  --output_dir tacred_dir \
  --fp16
```

## Results Comparison 📝

|                   | SQuAD 1.1     | SQuAD 2.0  | Coref   | TACRED |
| ----------------- | ------------- | ---------- | ------- | ------ |
|                   | F1            | F1         | avg. F1 | F1     |
| BERT (base)       | 88.5*         | 76.5*      | 73.1    | 67.7   |
| SpanBERT (base)   | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | **68.2** (this one) |
| BERT (large)      | 91.3          | 83.3       | 77.1    | 66.4   |
| SpanBERT (large)  | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) |

Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers.

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain