| Column | Type | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | |
| tags | listlengths | 1–1.84k |
| sha | null | |
| created_at | stringlengths | 25–25 |
| arxiv | listlengths | 0–201 |
| languages | listlengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | listlengths | 0–722 |
| processed_texts | listlengths | 1–723 |
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Swedish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. The training data amounts to 402 MB. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-swedish-common-voice") model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-swedish-common-voice") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Swedish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "sv-SE", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-swedish-common-voice") model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-swedish-common-voice") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 36.91 % ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/1KkD4PeZwnIwxxxOP1bUE7XTZMK7-SzRj?usp=sharing).
{"language": "sv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Swedish by Birger Moell", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sv-SE", "type": "common_voice", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 36.91, "name": "Test WER"}]}]}]}
birgermoell/wav2vec2-swedish-common-voice
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "sv", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sv" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sv #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Swedish Fine-tuned facebook/wav2vec2-large-xlsr-53 in Swedish using the Common Voice dataset. The training data amounts to 402 MB. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Swedish test data of Common Voice. Test Result: 36.91 % ## Training The Common Voice 'train' and 'validation' datasets were used for training. The script used for training can be found here
[ "# Wav2Vec2-Large-XLSR-53-Swedish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Swedish using the Common Voice. The training data amounts to 402 MB.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\n\n\n\nTest Result: 36.91 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sv #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Swedish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Swedish using the Common Voice. The training data amounts to 402 MB.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\n\n\n\nTest Result: 36.91 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 530615016 - CO2 Emissions (in grams): 2.2247356264808964 ## Validation Metrics - Loss: 0.7859578132629395 - Accuracy: 0.676854818831649 - Macro F1: 0.3297126297995653 - Micro F1: 0.676854818831649 - Weighted F1: 0.6429522696884535 - Macro Precision: 0.33152557743856437 - Micro Precision: 0.676854818831649 - Weighted Precision: 0.6276125515413322 - Macro Recall: 0.33784302289888885 - Micro Recall: 0.676854818831649 - Weighted Recall: 0.676854818831649 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bitmorse/autonlp-ks-530615016 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["bitmorse/autonlp-data-ks"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 2.2247356264808964}
bitmorse/autonlp-ks-530615016
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:bitmorse/autonlp-data-ks", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-bitmorse/autonlp-data-ks #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 530615016 - CO2 Emissions (in grams): 2.2247356264808964 ## Validation Metrics - Loss: 0.7859578132629395 - Accuracy: 0.676854818831649 - Macro F1: 0.3297126297995653 - Micro F1: 0.676854818831649 - Weighted F1: 0.6429522696884535 - Macro Precision: 0.33152557743856437 - Micro Precision: 0.676854818831649 - Weighted Precision: 0.6276125515413322 - Macro Recall: 0.33784302289888885 - Micro Recall: 0.676854818831649 - Weighted Recall: 0.676854818831649 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 530615016\n- CO2 Emissions (in grams): 2.2247356264808964", "## Validation Metrics\n\n- Loss: 0.7859578132629395\n- Accuracy: 0.676854818831649\n- Macro F1: 0.3297126297995653\n- Micro F1: 0.676854818831649\n- Weighted F1: 0.6429522696884535\n- Macro Precision: 0.33152557743856437\n- Micro Precision: 0.676854818831649\n- Weighted Precision: 0.6276125515413322\n- Macro Recall: 0.33784302289888885\n- Micro Recall: 0.676854818831649\n- Weighted Recall: 0.676854818831649", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-bitmorse/autonlp-data-ks #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 530615016\n- CO2 Emissions (in grams): 2.2247356264808964", "## Validation Metrics\n\n- Loss: 0.7859578132629395\n- Accuracy: 0.676854818831649\n- Macro F1: 0.3297126297995653\n- Micro F1: 0.676854818831649\n- Weighted F1: 0.6429522696884535\n- Macro Precision: 0.33152557743856437\n- Micro Precision: 0.676854818831649\n- Weighted Precision: 0.6276125515413322\n- Macro Recall: 0.33784302289888885\n- Micro Recall: 0.676854818831649\n- Weighted Recall: 0.676854818831649", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
feature-extraction
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # kickstarter-distilbert-model This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Datasets 1.18.2 - Tokenizers 0.11.0
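The card above gives no usage snippet; below is a minimal feature-extraction sketch, assuming the checkpoint loads with the generic `AutoTokenizer`/`AutoModel` classes. The example sentence and the mean-pooling step are illustrative choices, not something specified by the card.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the checkpoint with the generic Auto classes (assumed to work for this DistilBERT export).
tokenizer = AutoTokenizer.from_pretrained("bitmorse/kickstarter-distilbert-model")
model = AutoModel.from_pretrained("bitmorse/kickstarter-distilbert-model")

texts = ["A smart water bottle that tracks your hydration."]  # illustrative input
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state over non-padding tokens to get one vector per text.
mask = inputs.attention_mask.unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (batch_size, hidden_size)
```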
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "kickstarter-distilbert-model", "results": []}]}
bitmorse/kickstarter-distilbert-model
null
[ "transformers", "pytorch", "tf", "distilbert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tf #distilbert #feature-extraction #generated_from_keras_callback #endpoints_compatible #region-us
# kickstarter-distilbert-model This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Datasets 1.18.2 - Tokenizers 0.11.0
[ "# kickstarter-distilbert-model\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.7.0\n- Datasets 1.18.2\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tf #distilbert #feature-extraction #generated_from_keras_callback #endpoints_compatible #region-us \n", "# kickstarter-distilbert-model\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.7.0\n- Datasets 1.18.2\n- Tokenizers 0.11.0" ]
fill-mask
transformers
# AlephBERT ## Hebrew Language Model State-of-the-art language model for Hebrew. Based on Google's BERT architecture [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). #### How to use ```python from transformers import BertModel, BertTokenizerFast alephbert_tokenizer = BertTokenizerFast.from_pretrained('onlplab/alephbert-base') alephbert = BertModel.from_pretrained('onlplab/alephbert-base') # if not finetuning - disable dropout alephbert.eval() ``` ## Training data 1. OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/) Hebrew section (10 GB text, 20 million sentences). 2. Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/hewiki/latest/) (650 MB text, 3 million sentences). 3. Hebrew Tweets collected from the Twitter sample stream (7 GB text, 70 million sentences). ## Training procedure Trained on a DGX machine (8 V100 GPUs) using the standard huggingface training procedure. Since the larger part of our training data is based on tweets we decided to start by optimizing using Masked Language Model loss only. To optimize training time we split the data into 4 sections based on max number of tokens: 1. num tokens < 32 (70M sentences) 2. 32 <= num tokens < 64 (12M sentences) 3. 64 <= num tokens < 128 (10M sentences) 4. 128 <= num tokens < 512 (1.5M sentences) Each section was first trained for 5 epochs with an initial learning rate set to 1e-4. Then each section was trained for another 5 epochs with an initial learning rate set to 1e-5, for a total of 10 epochs. Total training time was 8 days.
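Since the checkpoint is listed under fill-mask, a short pipeline sketch may be useful alongside the card's `BertModel` example; the Hebrew example sentence is illustrative and not taken from the card.

```python
from transformers import pipeline

# Masked-word prediction with the same checkpoint; [MASK] is BERT's mask token.
fill_mask = pipeline("fill-mask", model="onlplab/alephbert-base")
for prediction in fill_mask("הקפה היה [MASK] מאוד."):  # "The coffee was very [MASK]."
    print(prediction["token_str"], prediction["score"])
```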
{"language": ["he"], "license": "apache-2.0", "tags": ["language model"], "datasets": ["oscar", "wikipedia", "twitter"]}
biu-nlp/alephbert-base
null
[ "transformers", "pytorch", "bert", "fill-mask", "language model", "he", "dataset:oscar", "dataset:wikipedia", "dataset:twitter", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1810.04805" ]
[ "he" ]
TAGS #transformers #pytorch #bert #fill-mask #language model #he #dataset-oscar #dataset-wikipedia #dataset-twitter #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# AlephBERT ## Hebrew Language Model State-of-the-art language model for Hebrew. Based on Google's BERT architecture (Devlin et al. 2018). #### How to use ## Training data 1. OSCAR (Ortiz, 2019) Hebrew section (10 GB text, 20 million sentences). 2. Hebrew dump of Wikipedia (650 MB text, 3 million sentences). 3. Hebrew Tweets collected from the Twitter sample stream (7 GB text, 70 million sentences). ## Training procedure Trained on a DGX machine (8 V100 GPUs) using the standard huggingface training procedure. Since the larger part of our training data is based on tweets we decided to start by optimizing using Masked Language Model loss only. To optimize training time we split the data into 4 sections based on max number of tokens: 1. num tokens < 32 (70M sentences) 2. 32 <= num tokens < 64 (12M sentences) 3. 64 <= num tokens < 128 (10M sentences) 4. 128 <= num tokens < 512 (1.5M sentences) Each section was first trained for 5 epochs with an initial learning rate set to 1e-4. Then each section was trained for another 5 epochs with an initial learning rate set to 1e-5, for a total of 10 epochs. Total training time was 8 days.
[ "# AlephBERT", "## Hebrew Language Model\n\nState-of-the-art language model for Hebrew.\nBased on Google's BERT architecture (Devlin et al. 2018).", "#### How to use", "## Training data\n1. OSCAR (Ortiz, 2019) Hebrew section (10 GB text, 20 million sentences).\n2. Hebrew dump of Wikipedia (650 MB text, 3 million sentences).\n3. Hebrew Tweets collected from the Twitter sample stream (7 GB text, 70 million sentences).", "## Training procedure\n\nTrained on a DGX machine (8 V100 GPUs) using the standard huggingface training procedure.\n\nSince the larger part of our training data is based on tweets we decided to start by optimizing using Masked Language Model loss only.\n\nTo optimize training time we split the data into 4 sections based on max number of tokens:\n\n1. num tokens < 32 (70M sentences)\n2. 32 <= num tokens < 64 (12M sentences)\n3. 64 <= num tokens < 128 (10M sentences)\n4. 128 <= num tokens < 512 (1.5M sentences)\n\nEach section was first trained for 5 epochs with an initial learning rate set to 1e-4. Then each section was trained for another 5 epochs with an initial learning rate set to 1e-5, for a total of 10 epochs.\n\nTotal training time was 8 days." ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #language model #he #dataset-oscar #dataset-wikipedia #dataset-twitter #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# AlephBERT", "## Hebrew Language Model\n\nState-of-the-art language model for Hebrew.\nBased on Google's BERT architecture (Devlin et al. 2018).", "#### How to use", "## Training data\n1. OSCAR (Ortiz, 2019) Hebrew section (10 GB text, 20 million sentences).\n2. Hebrew dump of Wikipedia (650 MB text, 3 million sentences).\n3. Hebrew Tweets collected from the Twitter sample stream (7 GB text, 70 million sentences).", "## Training procedure\n\nTrained on a DGX machine (8 V100 GPUs) using the standard huggingface training procedure.\n\nSince the larger part of our training data is based on tweets we decided to start by optimizing using Masked Language Model loss only.\n\nTo optimize training time we split the data into 4 sections based on max number of tokens:\n\n1. num tokens < 32 (70M sentences)\n2. 32 <= num tokens < 64 (12M sentences)\n3. 64 <= num tokens < 128 (10M sentences)\n4. 128 <= num tokens < 512 (1.5M sentences)\n\nEach section was first trained for 5 epochs with an initial learning rate set to 1e-4. Then each section was trained for another 5 epochs with an initial learning rate set to 1e-5, for a total of 10 epochs.\n\nTotal training time was 8 days." ]
fill-mask
transformers
# Cross-Document Language Modeling CDLM: Cross-Document Language Modeling. Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew E Peters, Arie Cattan and Ido Dagan. In EMNLP Findings, 2021. [PDF](https://arxiv.org/pdf/2101.00406.pdf) Please note that during our pretraining we used the document and sentence separators, which you might want to add to your data. The document and sentence separators are `<doc-s>`, `</doc-s>` (the last two tokens in the vocabulary), and `<s>`, `</s>`, respectively. ```python from transformers import AutoTokenizer, AutoModel # load model and tokenizer tokenizer = AutoTokenizer.from_pretrained('biu-nlp/cdlm') model = AutoModel.from_pretrained('biu-nlp/cdlm') ``` The original repo is [here](https://github.com/aviclu/CDLM). If you find our work useful, please cite the paper as: ```python @article{caciularu2021cross, title={Cross-Document Language Modeling}, author={Caciularu, Avi and Cohan, Arman and Beltagy, Iz and Peters, Matthew E and Cattan, Arie and Dagan, Ido}, journal={Findings of the Association for Computational Linguistics: EMNLP 2021}, year={2021} } ```
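A hedged sketch of how one might apply the separators the card describes, assuming they are already registered in the released tokenizer; the two example documents are invented for illustration.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("biu-nlp/cdlm")
model = AutoModel.from_pretrained("biu-nlp/cdlm")

# Wrap each document in <doc-s> ... </doc-s>, as the card recommends for cross-document input.
doc1 = "<doc-s> The company announced a merger on Monday. </doc-s>"
doc2 = "<doc-s> Shares rose sharply after the merger announcement. </doc-s>"

inputs = tokenizer(doc1 + " " + doc2, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextual embeddings spanning both documents
```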
{"language": "en", "license": "apache-2.0", "tags": ["longformer", "cdlm"], "inference": false}
biu-nlp/cdlm
null
[ "transformers", "pytorch", "longformer", "fill-mask", "cdlm", "en", "arxiv:2101.00406", "license:apache-2.0", "autotrain_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2101.00406" ]
[ "en" ]
TAGS #transformers #pytorch #longformer #fill-mask #cdlm #en #arxiv-2101.00406 #license-apache-2.0 #autotrain_compatible #region-us
# Cross-Document Language Modeling CDLM: Cross-Document Language Modeling. Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew E Peters, Arie Cattan and Ido Dagan. In EMNLP Findings, 2021. PDF Please note that during our pretraining we used the document and sentence separators, which you might want to add to your data. The document and sentence separators are '<doc-s>', '</doc-s>' (the last two tokens in the vocabulary), and '<s>', '</s>', respectively. The original repo is here. If you find our work useful, please cite the paper as:
[ "# Cross-Document Language Modeling\n\nCDLM: Cross-Document Language Modeling. \nAvi Caciularu, Arman Cohan, Iz Beltagy, Matthew E Peters, Arie Cattan and Ido Dagan. In EMNLP Findings, 2021. PDF\n\n\nPlease note that during our pretraining we used the document and sentence separators, which you might want to add to your data. The document and sentence separators are '<doc-s>', '</doc-s>' (the last two tokens in the vocabulary), and '<s>', '</s>', respectively.\n\n\n\n\nThe original repo is here.\n\nIf you find our work useful, please cite the paper as:" ]
[ "TAGS\n#transformers #pytorch #longformer #fill-mask #cdlm #en #arxiv-2101.00406 #license-apache-2.0 #autotrain_compatible #region-us \n", "# Cross-Document Language Modeling\n\nCDLM: Cross-Document Language Modeling. \nAvi Caciularu, Arman Cohan, Iz Beltagy, Matthew E Peters, Arie Cattan and Ido Dagan. In EMNLP Findings, 2021. PDF\n\n\nPlease note that during our pretraining we used the document and sentence separators, which you might want to add to your data. The document and sentence separators are '<doc-s>', '</doc-s>' (the last two tokens in the vocabulary), and '<s>', '</s>', respectively.\n\n\n\n\nThe original repo is here.\n\nIf you find our work useful, please cite the paper as:" ]
text-classification
transformers
# SuperPAL model Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline Ori Ernst, Ori Shapira, Ramakanth Pasunuru, Michael Lepioshkin, Jacob Goldberger, Mohit Bansal, Ido Dagan, 2021. [PDF](https://arxiv.org/pdf/2009.00590) **How to use?** ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("biu-nlp/superpal") model = AutoModelForSequenceClassification.from_pretrained("biu-nlp/superpal") ``` The original repo is [here](https://github.com/oriern/SuperPAL). If you find our work useful, please cite the paper as: ```python @inproceedings{ernst-etal-2021-summary, title = "Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline", author = "Ernst, Ori and Shapira, Ori and Pasunuru, Ramakanth and Lepioshkin, Michael and Goldberger, Jacob and Bansal, Mohit and Dagan, Ido", booktitle = "Proceedings of the 25th Conference on Computational Natural Language Learning", month = nov, year = "2021", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.conll-1.25", pages = "310--322" } ```
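A minimal scoring sketch, following the pair format shown in the card's widget example (summary span, then `</s><s>`, then source span); the interpretation of the output classes is left to `model.config.id2label`, since the card does not document the label order.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("biu-nlp/superpal")
model = AutoModelForSequenceClassification.from_pretrained("biu-nlp/superpal")

# Summary proposition </s><s> source sentence, as in the widget example.
pair = ("Prime Minister Hun Sen insisted that talks take place in Cambodia. </s><s> "
        "Cambodian leader Hun Sen rejected opposition parties' demands for talks outside the country.")

with torch.no_grad():
    logits = model(**tokenizer(pair, return_tensors="pt")).logits
print(torch.softmax(logits, dim=-1))  # alignment probabilities; see model.config.id2label for label names
```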
{"widget": [{"text": "Prime Minister Hun Sen insisted that talks take place in Cambodia. </s><s> Cambodian leader Hun Sen rejected opposition parties' demands for talks outside the country."}]}
biu-nlp/superpal
null
[ "transformers", "pytorch", "roberta", "text-classification", "arxiv:2009.00590", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2009.00590" ]
[]
TAGS #transformers #pytorch #roberta #text-classification #arxiv-2009.00590 #autotrain_compatible #endpoints_compatible #region-us
# SuperPAL model Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline Ori Ernst, Ori Shapira, Ramakanth Pasunuru, Michael Lepioshkin, Jacob Goldberger, Mohit Bansal, Ido Dagan, 2021. PDF How to use? The original repo is here. If you find our work useful, please cite the paper as:
[ "# SuperPAL model\n\nSummary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline\nOri Ernst, Ori Shapira, Ramakanth Pasunuru, Michael Lepioshkin, Jacob Goldberger, Mohit Bansal, Ido Dagan, 2021. PDF\n\nHow to use?\n\n\n\n\n\nThe original repo is here.\n\n\nIf you find our work useful, please cite the paper as:" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #arxiv-2009.00590 #autotrain_compatible #endpoints_compatible #region-us \n", "# SuperPAL model\n\nSummary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline\nOri Ernst, Ori Shapira, Ramakanth Pasunuru, Michael Lepioshkin, Jacob Goldberger, Mohit Bansal, Ido Dagan, 2021. PDF\n\nHow to use?\n\n\n\n\n\nThe original repo is here.\n\n\nIf you find our work useful, please cite the paper as:" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlxlm-finetuned-funsd-test This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.8.0+cu101 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "layoutlxlm-finetuned-funsd-test", "results": []}]}
bjorz/layoutxlm-finetuned-funsd-test
null
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #layoutlmv2 #token-classification #generated_from_trainer #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
# layoutlxlm-finetuned-funsd-test This model is a fine-tuned version of microsoft/layoutxlm-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.8.0+cu101 - Datasets 1.15.1 - Tokenizers 0.10.3
[ "# layoutlxlm-finetuned-funsd-test\n\nThis model is a fine-tuned version of microsoft/layoutxlm-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 1000\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.8.0+cu101\n- Datasets 1.15.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #layoutlmv2 #token-classification #generated_from_trainer #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# layoutlxlm-finetuned-funsd-test\n\nThis model is a fine-tuned version of microsoft/layoutxlm-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 1000\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.8.0+cu101\n- Datasets 1.15.1\n- Tokenizers 0.10.3" ]
image-classification
transformers
# simple_kitchen Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### best kitchen island ![best kitchen island](images/best_kitchen_island.jpg) #### kitchen cabinet ![kitchen cabinet](images/kitchen_cabinet.jpg) #### kitchen countertop ![kitchen countertop](images/kitchen_countertop.jpg)
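A minimal inference sketch for this HuggingPics classifier using the standard image-classification pipeline; the photo filename below is a placeholder to replace with your own image path or URL.

```python
from transformers import pipeline

# ViT-based classifier produced by HuggingPics; accepts a local path or URL to an image.
classifier = pipeline("image-classification", model="black/simple_kitchen")
print(classifier("my_kitchen_photo.jpg"))  # placeholder filename; returns label/score pairs
```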
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
black/simple_kitchen
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
# simple_kitchen Autogenerated by HuggingPics️ Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the github repo. ## Example Images #### best kitchen island !best kitchen island #### kitchen cabinet !kitchen cabinet #### kitchen countertop !kitchen countertop
[ "# simple_kitchen\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### best kitchen island\n\n!best kitchen island", "#### kitchen cabinet\n\n!kitchen cabinet", "#### kitchen countertop\n\n!kitchen countertop" ]
[ "TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "# simple_kitchen\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### best kitchen island\n\n!best kitchen island", "#### kitchen cabinet\n\n!kitchen cabinet", "#### kitchen countertop\n\n!kitchen countertop" ]
text-classification
transformers
BERT-based model fine-tuned on MNLI with our custom training routine. Yields 60% accuracy on the adversarial HANS dataset.
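A minimal usage sketch, assuming the standard sequence-classification interface; the premise/hypothesis pair is illustrative, and the label order is not documented in the card, so it is read from the model config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("blackbird/bert-base-uncased-MNLI-v1")
model = AutoModelForSequenceClassification.from_pretrained("blackbird/bert-base-uncased-MNLI-v1")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

with torch.no_grad():
    logits = model(**tokenizer(premise, hypothesis, return_tensors="pt")).logits
probs = torch.softmax(logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], float(p))  # label names come from the uploaded config
```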
{}
blackbird/bert-base-uncased-MNLI-v1
null
[ "transformers", "pytorch", "jax", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #jax #safetensors #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
BERT-based model fine-tuned on MNLI with our custom training routine. Yields 60% accuracy on the adversarial HANS dataset.
[]
[ "TAGS\n#transformers #pytorch #jax #safetensors #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
null
# TEST # huggingface model
{}
blackface/dummy
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
# TEST # huggingface model
[ "# TEST", "# huggingface model" ]
[ "TAGS\n#region-us \n", "# TEST", "# huggingface model" ]
text-classification
transformers
# RuBERT for Sentiment Analysis of Medical Reviews This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on corpus of medical reviews. ## Labels 0: NEUTRAL 1: POSITIVE 2: NEGATIVE ## How to use ```python import torch from transformers import AutoModelForSequenceClassification from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-med') model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-med', return_dict=True) @torch.no_grad() def predict(text): inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**inputs) predicted = torch.nn.functional.softmax(outputs.logits, dim=1) predicted = torch.argmax(predicted, dim=1).numpy() return predicted ``` ## Dataset used for model training **[Отзывы о медучреждениях](https://github.com/blanchefort/datasets/tree/master/medical_comments)** > Датасет содержит пользовательские отзывы о медицинских учреждениях. Датасет собран в мае 2019 года с сайта prodoctorov.ru
{"language": ["ru"], "tags": ["sentiment", "text-classification"]}
blanchefort/rubert-base-cased-sentiment-med
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #text-classification #sentiment #ru #autotrain_compatible #endpoints_compatible #region-us
# RuBERT for Sentiment Analysis of Medical Reviews This is a DeepPavlov/rubert-base-cased-conversational model trained on corpus of medical reviews. ## Labels 0: NEUTRAL 1: POSITIVE 2: NEGATIVE ## How to use ## Dataset used for model training Отзывы о медучреждениях > Датасет содержит пользовательские отзывы о медицинских учреждениях. Датасет собран в мае 2019 года с сайта URL
[ "# RuBERT for Sentiment Analysis of Medical Reviews\n\nThis is a DeepPavlov/rubert-base-cased-conversational model trained on corpus of medical reviews.", "## Labels\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE", "## How to use", "## Dataset used for model training\n\nОтзывы о медучреждениях\n\n> Датасет содержит пользовательские отзывы о медицинских учреждениях. Датасет собран в мае 2019 года с сайта URL" ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #text-classification #sentiment #ru #autotrain_compatible #endpoints_compatible #region-us \n", "# RuBERT for Sentiment Analysis of Medical Reviews\n\nThis is a DeepPavlov/rubert-base-cased-conversational model trained on corpus of medical reviews.", "## Labels\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE", "## How to use", "## Dataset used for model training\n\nОтзывы о медучреждениях\n\n> Датасет содержит пользовательские отзывы о медицинских учреждениях. Датасет собран в мае 2019 года с сайта URL" ]
text-classification
transformers
# RuBERT for Sentiment Analysis of Tweets This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuTweetCorp](https://study.mokoron.com/). ## Labels 0: POSITIVE 1: NEGATIVE ## How to use ```python import torch from transformers import AutoModelForSequenceClassification from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron') model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron', return_dict=True) @torch.no_grad() def predict(text): inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**inputs) predicted = torch.nn.functional.softmax(outputs.logits, dim=1) predicted = torch.argmax(predicted, dim=1).numpy() return predicted ``` ## Dataset used for model training **[RuTweetCorp](https://study.mokoron.com/)** > Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора // Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116.
{"language": ["ru"], "tags": ["sentiment", "text-classification"], "datasets": ["RuTweetCorp"]}
blanchefort/rubert-base-cased-sentiment-mokoron
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment", "ru", "dataset:RuTweetCorp", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #text-classification #sentiment #ru #dataset-RuTweetCorp #autotrain_compatible #endpoints_compatible #region-us
# RuBERT for Sentiment Analysis of Tweets This is a DeepPavlov/rubert-base-cased-conversational model trained on RuTweetCorp. ## Labels 0: POSITIVE 1: NEGATIVE ## How to use ## Dataset used for model training RuTweetCorp > Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора // Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116.
[ "# RuBERT for Sentiment Analysis of Tweets\n\nThis is a DeepPavlov/rubert-base-cased-conversational model trained on RuTweetCorp.", "## Labels\n 0: POSITIVE\n 1: NEGATIVE", "## How to use", "## Dataset used for model training\n\nRuTweetCorp\n\n> Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора // Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116." ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #text-classification #sentiment #ru #dataset-RuTweetCorp #autotrain_compatible #endpoints_compatible #region-us \n", "# RuBERT for Sentiment Analysis of Tweets\n\nThis is a DeepPavlov/rubert-base-cased-conversational model trained on RuTweetCorp.", "## Labels\n 0: POSITIVE\n 1: NEGATIVE", "## How to use", "## Dataset used for model training\n\nRuTweetCorp\n\n> Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора // Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116." ]
text-classification
transformers
# RuBERT for Sentiment Analysis of Product Reviews This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuReviews](https://github.com/sismetanin/rureviews). ## Labels 0: NEUTRAL 1: POSITIVE 2: NEGATIVE ## How to use ```python import torch from transformers import AutoModelForSequenceClassification from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-rurewiews') model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-rurewiews', return_dict=True) @torch.no_grad() def predict(text): inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**inputs) predicted = torch.nn.functional.softmax(outputs.logits, dim=1) predicted = torch.argmax(predicted, dim=1).numpy() return predicted ``` ## Dataset used for model training **[RuReviews](https://github.com/sismetanin/rureviews)** > RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.
{"language": ["ru"], "tags": ["sentiment", "text-classification"], "datasets": ["RuReviews"]}
blanchefort/rubert-base-cased-sentiment-rurewiews
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment", "ru", "dataset:RuReviews", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #text-classification #sentiment #ru #dataset-RuReviews #autotrain_compatible #endpoints_compatible #has_space #region-us
# RuBERT for Sentiment Analysis of Product Reviews This is a DeepPavlov/rubert-base-cased-conversational model trained on RuReviews. ## Labels 0: NEUTRAL 1: POSITIVE 2: NEGATIVE ## How to use ## Dataset used for model training RuReviews > RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.
[ "# RuBERT for Sentiment Analysis of Product Reviews\n\nThis is a DeepPavlov/rubert-base-cased-conversational model trained on RuReviews.", "## Labels\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE", "## How to use", "## Dataset used for model training\n\nRuReviews\n\n> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian." ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #text-classification #sentiment #ru #dataset-RuReviews #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# RuBERT for Sentiment Analysis of Product Reviews\n\nThis is a DeepPavlov/rubert-base-cased-conversational model trained on RuReviews.", "## Labels\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE", "## How to use", "## Dataset used for model training\n\nRuReviews\n\n> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian." ]
text-classification
transformers
# RuBERT for Sentiment Analysis This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/). ## Labels 0: NEUTRAL 1: POSITIVE 2: NEGATIVE ## How to use ```python import torch from transformers import AutoModelForSequenceClassification from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-rusentiment') model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-rusentiment', return_dict=True) @torch.no_grad() def predict(text): inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**inputs) predicted = torch.nn.functional.softmax(outputs.logits, dim=1) predicted = torch.argmax(predicted, dim=1).numpy() return predicted ``` ## Dataset used for model training **[RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/)** > A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018.
{"language": ["ru"], "tags": ["sentiment", "text-classification"], "datasets": ["RuSentiment"]}
blanchefort/rubert-base-cased-sentiment-rusentiment
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment", "ru", "dataset:RuSentiment", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #text-classification #sentiment #ru #dataset-RuSentiment #autotrain_compatible #endpoints_compatible #has_space #region-us
# RuBERT for Sentiment Analysis This is a DeepPavlov/rubert-base-cased-conversational model trained on RuSentiment. ## Labels 0: NEUTRAL 1: POSITIVE 2: NEGATIVE ## How to use ## Dataset used for model training RuSentiment > A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018.
[ "# RuBERT for Sentiment Analysis\n\nThis is a DeepPavlov/rubert-base-cased-conversational model trained on RuSentiment.", "## Labels\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE", "## How to use", "## Dataset used for model training\n\nRuSentiment\n\n> A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018." ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #text-classification #sentiment #ru #dataset-RuSentiment #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# RuBERT for Sentiment Analysis\n\nThis is a DeepPavlov/rubert-base-cased-conversational model trained on RuSentiment.", "## Labels\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE", "## How to use", "## Dataset used for model training\n\nRuSentiment\n\n> A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018." ]
text-classification
transformers
# RuBERT for Sentiment Analysis Short Russian texts sentiment classification This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on aggregated corpus of 351.797 texts. ## Labels 0: NEUTRAL 1: POSITIVE 2: NEGATIVE ## How to use ```python import torch from transformers import AutoModelForSequenceClassification from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment') model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment', return_dict=True) @torch.no_grad() def predict(text): inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**inputs) predicted = torch.nn.functional.softmax(outputs.logits, dim=1) predicted = torch.argmax(predicted, dim=1).numpy() return predicted ``` ## Datasets used for model training **[RuTweetCorp](https://study.mokoron.com/)** > Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора //Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116. **[RuReviews](https://github.com/sismetanin/rureviews)** > RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian. **[RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/)** > A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018. **[Отзывы о медучреждениях](https://github.com/blanchefort/datasets/tree/master/medical_comments)** > Датасет содержит пользовательские отзывы о медицинских учреждениях. Датасет собран в мае 2019 года с сайта prodoctorov.ru
{"language": ["ru"], "tags": ["sentiment", "text-classification"]}
blanchefort/rubert-base-cased-sentiment
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment", "ru", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bert #text-classification #sentiment #ru #autotrain_compatible #endpoints_compatible #has_space #region-us
# RuBERT for Sentiment Analysis Short Russian texts sentiment classification This is a DeepPavlov/rubert-base-cased-conversational model trained on aggregated corpus of 351.797 texts. ## Labels 0: NEUTRAL 1: POSITIVE 2: NEGATIVE ## How to use ## Datasets used for model training RuTweetCorp > Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора //Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116. RuReviews > RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian. RuSentiment > A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018. Отзывы о медучреждениях > Датасет содержит пользовательские отзывы о медицинских учреждениях. Датасет собран в мае 2019 года с сайта URL
[ "# RuBERT for Sentiment Analysis\nShort Russian texts sentiment classification\n\nThis is a DeepPavlov/rubert-base-cased-conversational model trained on aggregated corpus of 351.797 texts.", "## Labels\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE", "## How to use", "## Datasets used for model training\n\nRuTweetCorp\n\n> Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора //Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116.\n\nRuReviews\n\n> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.\n\nRuSentiment\n\n> A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018.\n\nОтзывы о медучреждениях\n\n> Датасет содержит пользовательские отзывы о медицинских учреждениях. Датасет собран в мае 2019 года с сайта URL" ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #text-classification #sentiment #ru #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# RuBERT for Sentiment Analysis\nShort Russian texts sentiment classification\n\nThis is a DeepPavlov/rubert-base-cased-conversational model trained on aggregated corpus of 351.797 texts.", "## Labels\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE", "## How to use", "## Datasets used for model training\n\nRuTweetCorp\n\n> Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора //Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116.\n\nRuReviews\n\n> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.\n\nRuSentiment\n\n> A. Rogers A. Romanov A. Rumshisky S. Volkova M. Gronas A. Gribov RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018.\n\nОтзывы о медучреждениях\n\n> Датасет содержит пользовательские отзывы о медицинских учреждениях. Датасет собран в мае 2019 года с сайта URL" ]
text-generation
transformers
# ss
{"tags": ["conversational"]}
bleachybrain/DialoGPT-med-ss
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# ss
[ "# ss" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# ss" ]
fill-mask
transformers
# RoBERTa-like language model trained on part of the TAIGA corpus ## Training Details - about 60k steps ## Example pipeline ```python from transformers import pipeline from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained('blinoff/roberta-base-russian-v0', max_len=512) fill_mask = pipeline( "fill-mask", model="blinoff/roberta-base-russian-v0", tokenizer=tokenizer ) fill_mask("Мозг — это машина <mask>, которая пытается снизить ошибку в прогнозе.") # { # 'sequence': '<s>Мозг — это машина города, которая пытается снизить ошибку в прогнозе.</s>', # 'score': 0.012859329581260681, # 'token': 2144, # 'token_str': 'ĠгоÑĢода' # }, # { # 'sequence': '<s>Мозг — это машина человека, которая пытается снизить ошибку в прогнозе.</s>', # 'score': 0.01185101643204689, # 'token': 1470, # 'token_str': 'ĠÑĩеловека' # }, # { # 'sequence': '<s>Мозг — это машина дома, которая пытается снизить ошибку в прогнозе.</s>', # 'score': 0.009940559044480324, # 'token': 1411, # 'token_str': 'Ġдома' # }, # { # 'sequence': '<s>Мозг — это машина женщина, которая пытается снизить ошибку в прогнозе.</s>', # 'score': 0.007794599514454603, # 'token': 2707, # 'token_str': 'ĠженÑīина' # }, # { # 'sequence': '<s>Мозг — это машина женщины, которая пытается снизить ошибку в прогнозе.</s>', # 'score': 0.007725382689386606, # 'token': 3546, # 'token_str': 'ĠженÑīинÑĭ' # } ```
{"language": "ru", "widget": [{"text": "\u041c\u043e\u0437\u0433 \u2014 \u044d\u0442\u043e \u043c\u0430\u0448\u0438\u043d\u0430 \u0432\u044b\u0432\u043e\u0434\u0430, \u043a\u043e\u0442\u043e\u0440\u0430\u044f \u043f\u044b\u0442\u0430\u0435\u0442\u0441\u044f <mask> \u043e\u0448\u0438\u0431\u043a\u0443 \u0432 \u043f\u0440\u043e\u0433\u043d\u043e\u0437\u0435.", "example_title": "brain_example"}, {"text": "\u041d\u0438\u043a\u043e\u0433\u0434\u0430 \u043d\u0435 \u0441\u043f\u043e\u0440\u044c\u0442\u0435 \u0441 \u0438\u0434\u0438\u043e\u0442\u0430\u043c\u0438, <mask> \u043e\u043f\u0443\u0441\u0442\u0438\u0442\u0435\u0441\u044c \u0434\u043e \u0438\u0445 \u0443\u0440\u043e\u0432\u043d\u044f, \u0433\u0434\u0435 \u043e\u043d\u0438 \u0432\u0430\u0441 \u0437\u0430\u0434\u0430\u0432\u044f\u0442 \u0441\u0432\u043e\u0438\u043c \u043e\u043f\u044b\u0442\u043e\u043c.", "example_title": "idiot_example"}]}
blinoff/roberta-base-russian-v0
null
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "fill-mask", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #jax #safetensors #roberta #fill-mask #ru #autotrain_compatible #endpoints_compatible #region-us
# RoBERTa-like language model trained on part of the TAIGA corpus ## Training Details - about 60k steps ## Example pipeline
[ "# RoBERTa-like language model trained on part of part of TAIGA corpus", "## Training Details\n\n- about 60k steps\n\n![]()", "## Example pipeline" ]
[ "TAGS\n#transformers #pytorch #jax #safetensors #roberta #fill-mask #ru #autotrain_compatible #endpoints_compatible #region-us \n", "# RoBERTa-like language model trained on part of part of TAIGA corpus", "## Training Details\n\n- about 60k steps\n\n![]()", "## Example pipeline" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1 This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6660 - Accuracy: 0.7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 57 | 0.8471 | 0.58 | | No log | 2.0 | 114 | 0.8450 | 0.58 | | No log | 3.0 | 171 | 0.7846 | 0.58 | | No log | 4.0 | 228 | 0.8649 | 0.58 | | No log | 5.0 | 285 | 0.7220 | 0.68 | | No log | 6.0 | 342 | 0.7395 | 0.66 | | No log | 7.0 | 399 | 0.7198 | 0.72 | | No log | 8.0 | 456 | 0.6417 | 0.72 | | 0.7082 | 9.0 | 513 | 0.6265 | 0.74 | | 0.7082 | 10.0 | 570 | 0.6660 | 0.7 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.0 - Tokenizers 0.10.3
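A hedged inference sketch for the fine-tuned classifier; the exact input formatting used during fine-tuning (question only vs. question plus abstract) is not documented in the card, so the single-question input below is an assumption, and label names are taken from the uploaded config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# PubMedQA-style yes/no/maybe question; illustrative input only.
question = "Does aspirin reduce the risk of colorectal cancer?"
with torch.no_grad():
    logits = model(**tokenizer(question, return_tensors="pt")).logits
probs = torch.softmax(logits, dim=-1)[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```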
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": [], "metrics": ["accuracy"]}
blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1 ======================================================================== This model is a fine-tuned version of microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6660 * Accuracy: 0.7 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.9.0+cu102 * Datasets 1.12.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2 This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0005 - Accuracy: 0.54 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 57 | 1.3510 | 0.54 | | No log | 2.0 | 114 | 0.9606 | 0.54 | | No log | 3.0 | 171 | 0.9693 | 0.54 | | No log | 4.0 | 228 | 1.0445 | 0.54 | | No log | 5.0 | 285 | 1.0005 | 0.54 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
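The card does not show how to query the resulting checkpoint. A plain text-classification pipeline should load it; the example input and its format are assumptions, since the preprocessing used during fine-tuning is not documented.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2",
)

# Hypothetical PubMedQA-style input; the exact question/context formatting used in training is unknown.
print(classifier("Does aspirin reduce the risk of cardiovascular events?"))
# -> e.g. [{'label': 'LABEL_1', 'score': 0.55}]  (label names are not mapped in the card)
```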
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": [], "metrics": ["accuracy"]}
blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2 ======================================================================== This model is a fine-tuned version of microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.0005 * Accuracy: 0.54 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.9.0+cu102 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6748 - Accuracy: 0.72 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 57 | 0.8396 | 0.58 | | No log | 2.0 | 114 | 0.8608 | 0.58 | | No log | 3.0 | 171 | 0.7642 | 0.68 | | No log | 4.0 | 228 | 0.8196 | 0.64 | | No log | 5.0 | 285 | 0.6477 | 0.72 | | No log | 6.0 | 342 | 0.6861 | 0.72 | | No log | 7.0 | 399 | 0.6735 | 0.74 | | No log | 8.0 | 456 | 0.6516 | 0.72 | | 0.6526 | 9.0 | 513 | 0.6707 | 0.72 | | 0.6526 | 10.0 | 570 | 0.6748 | 0.72 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.0 - Tokenizers 0.10.3
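The accuracy values above are presumably produced by a `compute_metrics` callback passed to the Trainer. A typical implementation looks like the following sketch, which is a reconstruction rather than the original training code.

```python
import numpy as np
from datasets import load_metric

accuracy = load_metric("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes a (logits, labels) tuple at evaluation time.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)

# Passed to the Trainer via: Trainer(..., compute_metrics=compute_metrics)
```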
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": [], "metrics": ["accuracy"]}
blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa ====================================================================== This model is a fine-tuned version of microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6748 * Accuracy: 0.72 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.9.0+cu102 * Datasets 1.12.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-base-cased-v1.1-finetuned-pubmedqa This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3182 - Accuracy: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 57 | 0.8591 | 0.58 | | No log | 2.0 | 114 | 0.9120 | 0.58 | | No log | 3.0 | 171 | 0.8159 | 0.62 | | No log | 4.0 | 228 | 1.1651 | 0.54 | | No log | 5.0 | 285 | 1.2350 | 0.6 | | No log | 6.0 | 342 | 1.5563 | 0.68 | | No log | 7.0 | 399 | 2.0233 | 0.58 | | No log | 8.0 | 456 | 2.2054 | 0.5 | | 0.4463 | 9.0 | 513 | 2.2434 | 0.5 | | 0.4463 | 10.0 | 570 | 2.3182 | 0.5 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
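For readers who want to mirror the optimizer and scheduler rows outside the Trainer, a rough equivalent is sketched below; the warmup step count and the label count are assumptions, while the 570 total steps come from the results table (57 steps per epoch for 10 epochs).

```python
import torch
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup

# num_labels=3 is an assumption; the card does not state the label set.
model = AutoModelForSequenceClassification.from_pretrained(
    "dmis-lab/biobert-base-cased-v1.1", num_labels=3
)

# Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, betas=(0.9, 0.999), eps=1e-8)

# Linear decay over the whole run: 57 optimisation steps per epoch x 10 epochs = 570 steps.
# num_warmup_steps=0 is an assumption, since no warmup is reported.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=570
)
```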
{"tags": ["generated_from_trainer"], "datasets": [], "metrics": ["accuracy"]}
blizrys/biobert-base-cased-v1.1-finetuned-pubmedqa
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #model-index #autotrain_compatible #endpoints_compatible #region-us
biobert-base-cased-v1.1-finetuned-pubmedqa ========================================== This model is a fine-tuned version of dmis-lab/biobert-base-cased-v1.1 on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.3182 * Accuracy: 0.5 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-v1.1-finetuned-pubmedqa-adapter This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0910 - Accuracy: 0.48 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 57 | 0.9848 | 0.58 | | No log | 2.0 | 114 | 0.8537 | 0.58 | | No log | 3.0 | 171 | 0.9565 | 0.42 | | No log | 4.0 | 228 | 0.9659 | 0.56 | | No log | 5.0 | 285 | 0.9763 | 0.6 | | No log | 6.0 | 342 | 1.0647 | 0.66 | | No log | 7.0 | 399 | 1.4305 | 0.6 | | No log | 8.0 | 456 | 2.0545 | 0.56 | | 0.6957 | 9.0 | 513 | 2.2438 | 0.5 | | 0.6957 | 10.0 | 570 | 2.0910 | 0.48 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": [], "metrics": ["accuracy"], "model_index": [{"name": "biobert-v1.1-finetuned-pubmedqa-adapter", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.48}}]}]}
blizrys/biobert-v1.1-finetuned-pubmedqa-adapter
null
[ "tensorboard", "generated_from_trainer", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #tensorboard #generated_from_trainer #region-us
biobert-v1.1-finetuned-pubmedqa-adapter ======================================= This model is a fine-tuned version of dmis-lab/biobert-v1.1 on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.0910 * Accuracy: 0.48 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.8.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#tensorboard #generated_from_trainer #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-v1.1-finetuned-pubmedqa This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7737 - Accuracy: 0.7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 57 | 0.8810 | 0.56 | | No log | 2.0 | 114 | 0.8139 | 0.62 | | No log | 3.0 | 171 | 0.7963 | 0.68 | | No log | 4.0 | 228 | 0.7709 | 0.66 | | No log | 5.0 | 285 | 0.7931 | 0.64 | | No log | 6.0 | 342 | 0.7420 | 0.7 | | No log | 7.0 | 399 | 0.7654 | 0.7 | | No log | 8.0 | 456 | 0.7756 | 0.68 | | 0.5849 | 9.0 | 513 | 0.7605 | 0.68 | | 0.5849 | 10.0 | 570 | 0.7737 | 0.7 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": [], "metrics": ["accuracy"]}
blizrys/biobert-v1.1-finetuned-pubmedqa
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #model-index #autotrain_compatible #endpoints_compatible #region-us
biobert-v1.1-finetuned-pubmedqa =============================== This model is a fine-tuned version of dmis-lab/biobert-v1.1 on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.7737 * Accuracy: 0.7 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6223 - Matthews Correlation: 0.5374 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5275 | 1.0 | 535 | 0.5456 | 0.3973 | | 0.3481 | 2.0 | 1070 | 0.5401 | 0.5006 | | 0.242 | 3.0 | 1605 | 0.6223 | 0.5374 | | 0.1725 | 4.0 | 2140 | 0.7934 | 0.5229 | | 0.1346 | 5.0 | 2675 | 0.8478 | 0.5367 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
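To try the checkpoint on CoLA-style acceptability judgments, a standard pipeline call is enough; the example sentences below are illustrative and the label mapping is an assumption, since the card does not document `id2label`.

```python
from transformers import pipeline

cola = pipeline("text-classification", model="blizrys/distilbert-base-uncased-finetuned-cola")

# Illustrative sentences (not from the card); CoLA scores grammatical acceptability.
print(cola("The book that I read was great."))
print(cola("The book what I read was great."))
# Without an id2label mapping in the card, outputs appear as LABEL_0 / LABEL_1;
# in the GLUE CoLA convention, 0 = unacceptable and 1 = acceptable.
```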
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5373623427702773, "name": "Matthews Correlation"}]}]}]}
blizrys/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6223 * Matthews Correlation: 0.5374 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6753 - Accuracy: 0.8206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.5146 | 1.0 | 24544 | 0.4925 | 0.8049 | | 0.4093 | 2.0 | 49088 | 0.5090 | 0.8164 | | 0.3122 | 3.0 | 73632 | 0.5299 | 0.8185 | | 0.2286 | 4.0 | 98176 | 0.6753 | 0.8206 | | 0.182 | 5.0 | 122720 | 0.8372 | 0.8195 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
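MNLI takes a premise and a hypothesis as a sentence pair, so inference needs the two texts passed together; the sketch below shows one way to do that, with the example pair and the label-order comment as assumptions to verify against `model.config.id2label`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "blizrys/distilbert-base-uncased-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."   # illustrative pair, not from the card
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# GLUE MNLI usually orders labels as 0 = entailment, 1 = neutral, 2 = contradiction,
# but check model.config.id2label, since the card does not document the mapping.
print(pred, model.config.id2label.get(pred, "unknown"))
```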
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mnli"}, "metrics": [{"type": "accuracy", "value": 0.8205807437595517, "name": "Accuracy"}]}]}]}
blizrys/distilbert-base-uncased-finetuned-mnli
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-mnli ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6753 * Accuracy: 0.8206 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
null
transformers
# Keyphrase Boundary Infilling with Replacement (KBIR) The KBIR model as described in "Learning Rich Representations of Keyphrases from Text" from Findings of NAACL 2022 (https://aclanthology.org/2022.findings-naacl.67.pdf) builds on top of the RoBERTa architecture by adding an Infilling head and a Replacement Classification head that is used during pre-training. However, these heads are not used during the downstream evaluation of the model and we only leverage the pre-trained embeddings. Discarding the heads thereby allows us to be compatible with all AutoModel classes that RoBERTa supports. We provide examples on how to perform downstream evaluation on some of the tasks reported in the paper. ## Downstream Evaluation ### Keyphrase Extraction ``` from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR") model = AutoModelForTokenClassification.from_pretrained("bloomberg/KBIR") from datasets import load_dataset dataset = load_dataset("midas/semeval2017_ke_tagged") ``` Reported Results: | Model | Inspec | SE10 | SE17 | |-----------------------|--------|-------|-------| | RoBERTa+BiLSTM-CRF | 59.5 | 27.8 | 50.8 | | RoBERTa+TG-CRF | 60.4 | 29.7 | 52.1 | | SciBERT+Hypernet-CRF | 62.1 | 36.7 | 54.4 | | RoBERTa+Hypernet-CRF | 62.3 | 34.8 | 53.3 | | RoBERTa-extended-CRF* | 62.09 | 40.61 | 52.32 | | KBI-CRF* | 62.61 | 40.81 | 59.7 | | KBIR-CRF* | 62.72 | 40.15 | 62.56 | ### Named Entity Recognition ``` from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR") model = AutoModelForTokenClassification.from_pretrained("bloomberg/KBIR") from datasets import load_dataset dataset = load_dataset("conll2003") ``` Reported Results: | Model | F1 | |---------------------------------|-------| | LSTM-CRF (Lample et al., 2016) | 91.0 | | ELMo (Peters et al., 2018) | 92.2 | | BERT (Devlin et al., 2018) | 92.8 | | (Akbik et al., 2019) | 93.1 | | (Baevski et al., 2019) | 93.5 | | LUKE (Yamada et al., 2020) | 94.3 | | LUKE w/o entity attention | 94.1 | | RoBERTa (Yamada et al., 2020) | 92.4 | | RoBERTa-extended* | 92.54 | | KBI* | 92.73 | | KBIR* | 92.97 | ### Question Answering ``` from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR") model = AutoModelForQuestionAnswering.from_pretrained("bloomberg/KBIR") from datasets import load_dataset dataset = load_dataset("squad") ``` Reported Results: | Model | EM | F1 | |------------------------|-------|-------| | BERT | 84.2 | 91.1 | | XLNet | 89.0 | 94.5 | | ALBERT | 89.3 | 94.8 | | LUKE | 89.8 | 95.0 | | LUKE w/o entity attention | 89.2 | 94.7 | | RoBERTa | 88.9 | 94.6 | | RoBERTa-extended* | 88.88 | 94.55 | | KBI* | 88.97 | 94.7 | | KBIR* | 89.04 | 94.75 | ## Any other classification task As mentioned above since KBIR is built on top of the RoBERTa architecture, it is compatible with any AutoModel setting that RoBERTa is also compatible with. We encourage you to try fine-tuning KBIR on different datasets and report the downstream results. 
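Since only the pre-trained embeddings are reused downstream, the checkpoint can also be loaded with the plain `AutoModel` class for feature extraction; the snippet below is a minimal sketch of that, with the example sentence and the mean pooling as illustrative choices.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR")
model = AutoModel.from_pretrained("bloomberg/KBIR")

text = "Keyphrase extraction identifies the most salient phrases in a document."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level embeddings from the final layer; mean pooling is one simple way to get a sentence vector.
token_embeddings = outputs.last_hidden_state        # shape: (1, seq_len, hidden_size)
sentence_embedding = token_embeddings.mean(dim=1)   # shape: (1, hidden_size)
print(sentence_embedding.shape)
```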
## Citation Please cite this work using the following BibTeX entry: ``` @inproceedings{kulkarni-etal-2022-learning, title = "Learning Rich Representation of Keyphrases from Text", author = "Kulkarni, Mayank and Mahata, Debanjan and Arora, Ravneet and Bhowmik, Rajarshi", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-naacl.67", doi = "10.18653/v1/2022.findings-naacl.67", pages = "891--906", abstract = "In this work, we explore how to train task-specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (upto 8.16 points in F1) over SOTA, when the LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (upto 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks.", } ``` ## Contact For any questions contact [email protected]
{"license": "apache-2.0"}
bloomberg/KBIR
null
[ "transformers", "pytorch", "roberta", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #roberta #license-apache-2.0 #endpoints_compatible #has_space #region-us
Keyphrase Boundary Infilling with Replacement (KBIR) ==================================================== The KBIR model as described in "Learning Rich Representations of Keyphrases from Text" from Findings of NAACL 2022 (URL builds on top of the RoBERTa architecture by adding an Infilling head and a Replacement Classification head that is used during pre-training. However, these heads are not used during the downstream evaluation of the model and we only leverage the pre-trained embeddings. Discarding the heads thereby allows us to be compatible with all AutoModel classes that RoBERTa supports. We provide examples on how to perform downstream evaluation on some of the tasks reported in the paper. Downstream Evaluation --------------------- ### Keyphrase Extraction Reported Results: ### Named Entity Recognition Reported Results: ### Question Answering Reported Results: Model: BERT, EM: 84.2, F1: 91.1 Model: XLNet, EM: 89.0, F1: 94.5 Model: ALBERT, EM: 89.3, F1: 94.8 Model: LUKE, EM: 89.8, F1: 95.0 Model: LUKE w/o entity attention, EM: 89.2, F1: 94.7 Model: RoBERTa, EM: 88.9, F1: 94.6 Model: RoBERTa-extended\*, EM: 88.88, F1: 94.55 Model: KBI\*, EM: 88.97, F1: 94.7 Model: KBIR\*, EM: 89.04, F1: 94.75 Any other classification task ----------------------------- As mentioned above since KBIR is built on top of the RoBERTa architecture, it is compatible with any AutoModel setting that RoBERTa is also compatible with. We encourage you to try fine-tuning KBIR on different datasets and report the downstream results. Please cite this work using the following BibTeX entry: Contact ------- For any questions contact dmahata@URL
[ "### Keyphrase Extraction\n\n\nReported Results:", "### Named Entity Recognition\n\n\nReported Results:", "### Question Answering\n\n\nReported Results:\n\n\nModel: BERT, EM: 84.2, F1: 91.1\nModel: XLNet, EM: 89.0, F1: 94.5\nModel: ALBERT, EM: 89.3, F1: 94.8\nModel: LUKE, EM: 89.8, F1: 95.0\nModel: LUKE w/o entity attention, EM: 89.2, F1: 94.7\nModel: RoBERTa, EM: 88.9, F1: 94.6\nModel: RoBERTa-extended\\*, EM: 88.88, F1: 94.55\nModel: KBI\\*, EM: 88.97, F1: 94.7\nModel: KBIR\\*, EM: 89.04, F1: 94.75\n\n\nAny other classification task\n-----------------------------\n\n\nAs mentioned above since KBIR is built on top of the RoBERTa architecture, it is compatible with any AutoModel setting that RoBERTa is also compatible with.\n\n\nWe encourage you to try fine-tuning KBIR on different datasets and report the downstream results.\n\n\nPlease cite this work using the following BibTeX entry:\n\n\nContact\n-------\n\n\nFor any questions contact dmahata@URL" ]
[ "TAGS\n#transformers #pytorch #roberta #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "### Keyphrase Extraction\n\n\nReported Results:", "### Named Entity Recognition\n\n\nReported Results:", "### Question Answering\n\n\nReported Results:\n\n\nModel: BERT, EM: 84.2, F1: 91.1\nModel: XLNet, EM: 89.0, F1: 94.5\nModel: ALBERT, EM: 89.3, F1: 94.8\nModel: LUKE, EM: 89.8, F1: 95.0\nModel: LUKE w/o entity attention, EM: 89.2, F1: 94.7\nModel: RoBERTa, EM: 88.9, F1: 94.6\nModel: RoBERTa-extended\\*, EM: 88.88, F1: 94.55\nModel: KBI\\*, EM: 88.97, F1: 94.7\nModel: KBIR\\*, EM: 89.04, F1: 94.75\n\n\nAny other classification task\n-----------------------------\n\n\nAs mentioned above since KBIR is built on top of the RoBERTa architecture, it is compatible with any AutoModel setting that RoBERTa is also compatible with.\n\n\nWe encourage you to try fine-tuning KBIR on different datasets and report the downstream results.\n\n\nPlease cite this work using the following BibTeX entry:\n\n\nContact\n-------\n\n\nFor any questions contact dmahata@URL" ]
text2text-generation
transformers
# KeyBART KeyBART as described in "Learning Rich Representations of Keyphrase from Text" published in the Findings of NAACL 2022 (https://aclanthology.org/2022.findings-naacl.67.pdf), pre-trains a BART-based architecture to produce a concatenated sequence of keyphrases in the CatSeqD format. We provide some examples on Downstream Evaluations setups and and also how it can be used for Text-to-Text Generation in a zero-shot setting. ## Downstream Evaluation ### Keyphrase Generation ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("bloomberg/KeyBART") model = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART") from datasets import load_dataset dataset = load_dataset("midas/kp20k") ``` Reported Results: #### Present Keyphrase Generation | | Inspec | | NUS | | Krapivin | | SemEval | | KP20k | | |---------------|--------|-------|-------|-------|----------|-------|---------|-------|-------|-------| | Model | F1@5 | F1@M | F1@5 | F1@M | F1@5 | F1@M | F1@5 | F1@M | F1@5 | F1@M | | catSeq | 22.5 | 26.2 | 32.3 | 39.7 | 26.9 | 35.4 | 24.2 | 28.3 | 29.1 | 36.7 | | catSeqTG | 22.9 | 27 | 32.5 | 39.3 | 28.2 | 36.6 | 24.6 | 29.0 | 29.2 | 36.6 | | catSeqTG-2RF1 | 25.3 | 30.1 | 37.5 | 43.3 | 30 | 36.9 | 28.7 | 32.9 | 32.1 | 38.6 | | GANMR | 25.8 | 29.9 | 34.8 | 41.7 | 28.8 | 36.9 | N/A | N/A | 30.3 | 37.8 | | ExHiRD-h | 25.3 | 29.1 | N/A | N/A | 28.6 | 34.7 | 28.4 | 33.5 | 31.1 | 37.4 | | Transformer (Ye et al., 2021) | 28.15 | 32.56 | 37.07 | 41.91 | 31.58 | 36.55 | 28.71 | 32.52 | 33.21 | 37.71 | | BART* | 23.59 | 28.46 | 35.00 | 42.65 | 26.91 | 35.37 | 26.72 | 31.91 | 29.25 | 37.51 | | KeyBART-DOC* | 24.42 | 29.57 | 31.37 | 39.24 | 24.21 | 32.60 | 24.69 | 30.50 | 28.82 | 37.59 | | KeyBART* | 24.49 | 29.69 | 34.77 | 43.57 | 29.24 | 38.62 | 27.47 | 33.54 | 30.71 | 39.76 | | KeyBART* (Zero-shot) | 30.72 | 36.89 | 18.86 | 21.67 | 18.35 | 20.46 | 20.25 | 25.82 | 12.57 | 15.41 | #### Absent Keyphrase Generation | | Inspec | | NUS | | Krapivin | | SemEval | | KP20k | | |---------------|--------|------|------|------|----------|------|---------|------|-------|------| | Model | F1@5 | F1@M | F1@5 | F1@M | F1@5 | F1@M | F1@5 | F1@M | F1@5 | F1@M | | catSeq | 0.4 | 0.8 | 1.6 | 2.8 | 1.8 | 3.6 | 1.6 | 2.8 | 1.5 | 3.2 | | catSeqTG | 0.5 | 1.1 | 1.1 | 1.8 | 1.8 | 3.4 | 1.1 | 1.8 | 1.5 | 3.2 | | catSeqTG-2RF1 | 1.2 | 2.1 | 1.9 | 3.1 | 3.0 | 5.3 | 2.1 | 3.0 | 2.7 | 5.0 | | GANMR | 1.3 | 1.9 | 2.6 | 3.8 | 4.2 | 5.7 | N/A | N/A | 3.2 | 4.5 | | ExHiRD-h | 1.1 | 2.2 | N/A | N/A | 2.2 | 4.3 | 1.7 | 2.5 | 1.6 | 3.2 | | Transformer (Ye et al., 2021) | 1.02 | 1.94 | 2.82 | 4.82 | 3.21 | 6.04 | 2.05 | 2.33 | 2.31 | 4.61 | | BART* | 1.08 | 1.96 | 1.80 | 2.75 | 2.59 | 4.91 | 1.34 | 1.75 | 1.77 | 3.56 | | KeyBART-DOC* | 0.99 | 2.03 | 1.39 | 2.74 | 2.40 | 4.58 | 1.07 | 1.39 | 1.69 | 3.38 | | KeyBART* | 0.95 | 1.81 | 1.23 | 1.90 | 3.09 | 6.08 | 1.96 | 2.65 | 2.03 | 4.26 | | KeyBART* (Zero-shot) | 1.83 | 2.92 | 1.46 | 2.19 | 1.29 | 2.09 | 1.12 | 1.45 | 0.70 | 1.14 | ### Abstractive Summarization ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("bloomberg/KeyBART") model = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART") from datasets import load_dataset dataset = load_dataset("cnn_dailymail") ``` Reported Results: | Model | R1 | R2 | RL | |--------------|-------|-------|-------| | BART (Lewis et al., 2019) | 44.16 | 21.28 | 40.9 | | BART* | 42.93 | 20.12 | 39.72 | | KeyBART-DOC* | 42.92 | 20.07 | 39.69 
| | KeyBART* | 43.10 | 20.26 | 39.90 | ## Zero-shot settings ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("bloomberg/KeyBART") model = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART") ``` Alternatively use the Hosted Inference API console provided in https://huggingface.co/bloomberg/KeyBART Sample Zero Shot result: ``` Input: In this work, we explore how to learn task specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (upto 9.26 points in F1) over SOTA, when LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (upto 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks. Output: language model;keyphrase generation;new pre-training objective;pre-training setup; ``` ## Citation Please cite this work using the following BibTeX entry: ``` @inproceedings{kulkarni-etal-2022-learning, title = "Learning Rich Representation of Keyphrases from Text", author = "Kulkarni, Mayank and Mahata, Debanjan and Arora, Ravneet and Bhowmik, Rajarshi", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-naacl.67", doi = "10.18653/v1/2022.findings-naacl.67", pages = "891--906", abstract = "In this work, we explore how to train task-specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (upto 8.16 points in F1) over SOTA, when the LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (upto 4.33 points in F1@M) over SOTA for keyphrase generation. 
Additionally, we also fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks.", } ``` Please direct all questions to [email protected]
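As a rough sketch of reproducing the zero-shot behaviour shown above, the snippet below feeds an abstract through `generate`; the beam-search settings and truncation length are assumptions, not the configuration used in the paper.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bloomberg/KeyBART")
model = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART")

abstract = (
    "In this work, we explore how to learn task specific language models aimed "
    "towards learning rich representation of keyphrases from text documents."
)

inputs = tokenizer(abstract, return_tensors="pt", truncation=True, max_length=512)
# Beam-search settings are illustrative; KeyBART emits keyphrases separated by ';'.
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```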
{"license": "apache-2.0"}
bloomberg/KeyBART
null
[ "transformers", "pytorch", "bart", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bart #text2text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
KeyBART ======= KeyBART as described in "Learning Rich Representations of Keyphrase from Text" published in the Findings of NAACL 2022 (URL pre-trains a BART-based architecture to produce a concatenated sequence of keyphrases in the CatSeqD format. We provide some examples on Downstream Evaluations setups and and also how it can be used for Text-to-Text Generation in a zero-shot setting. Downstream Evaluation --------------------- ### Keyphrase Generation Reported Results: #### Present Keyphrase Generation #### Absent Keyphrase Generation ### Abstractive Summarization Reported Results: Zero-shot settings ------------------ Alternatively use the Hosted Inference API console provided in URL Sample Zero Shot result: Please cite this work using the following BibTeX entry: Please direct all questions to dmahata@URL
[ "### Keyphrase Generation\n\n\nReported Results:", "#### Present Keyphrase Generation", "#### Absent Keyphrase Generation", "### Abstractive Summarization\n\n\nReported Results:\n\n\n\nZero-shot settings\n------------------\n\n\nAlternatively use the Hosted Inference API console provided in URL\n\n\nSample Zero Shot result:\n\n\nPlease cite this work using the following BibTeX entry:\n\n\nPlease direct all questions to dmahata@URL" ]
[ "TAGS\n#transformers #pytorch #bart #text2text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Keyphrase Generation\n\n\nReported Results:", "#### Present Keyphrase Generation", "#### Absent Keyphrase Generation", "### Abstractive Summarization\n\n\nReported Results:\n\n\n\nZero-shot settings\n------------------\n\n\nAlternatively use the Hosted Inference API console provided in URL\n\n\nSample Zero Shot result:\n\n\nPlease cite this work using the following BibTeX entry:\n\n\nPlease direct all questions to dmahata@URL" ]
null
null
# `paper-rec` Model Card Last updated: 2022-02-04 ## Model Details `paper-rec` goal is to recommend users what scientific papers to read next based on their preferences. This is a test model used to explore Hugging Face Hub capabilities and identify requirements to enable support for recommendation task in the ecosystem. ### Model date 2022-02-04 ### Model type Recommender System model with support of a Language Model for feature extraction. ### Paper & samples The overall idea for `paper-rec` test model is inspired by this work: [NU:BRIEF – A Privacy-aware Newsletter Personalization Engine for Publishers](https://arxiv.org/abs/2109.03955). However, for `paper-rec`, we use a different language model more suitable for longer text, namely *Sentence Transformers*: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084), in particular: [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). ## Model Use The intended direct users are recommender systems' practitioners and enthusiasts that would like to experiment with the task of scientific paper recommendation. ## Data, Performance, and Limitations ### Data The data used for this model corresponds to the [RSS news feeds for arXiv updates](https://arxiv.org/help/rss) accessed on 2022-02-04. In particular to the ones related to Machine Learning and AI: 1. [Artificial Intelligence](http://arxiv.org/rss/cs.AI) 1. [Computation and Language](http://arxiv.org/rss/cs.CL) 1. [Computer Vision and Pattern Recognition](http://arxiv.org/rss/cs.CV) 1. [Information Retrieval](http://arxiv.org/rss/cs.IR) 1. [Machine Learning (cs)](http://arxiv.org/rss/cs.LG) 1. [Machine Learning (stat)](http://arxiv.org/rss/stat.ML) ### Performance N/A ## Limitations The model is limited to the papers fetched on 2022-02-04, that is, those papers are the only ones it can recommend.
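The card ships no usage code, but given that feature extraction relies on sentence-transformers/all-MiniLM-L6-v2, a recommendation step can be sketched as embedding candidate abstracts and a user profile and ranking by cosine similarity; the candidate titles and profile text below are purely illustrative.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical candidate papers; in practice these would come from the arXiv RSS feeds listed above.
candidates = [
    "A survey of contrastive learning for recommendation.",
    "Efficient transformers for long document retrieval.",
    "Bayesian optimization for hyperparameter tuning.",
]
user_profile = "I am interested in information retrieval and recommender systems."

paper_embeddings = encoder.encode(candidates, convert_to_tensor=True)
user_embedding = encoder.encode(user_profile, convert_to_tensor=True)

# Rank candidates by cosine similarity to the user profile.
scores = util.cos_sim(user_embedding, paper_embeddings)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx].item():.3f}  {candidates[idx]}")
```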
{"language": ["en"], "license": "mit", "tags": ["recsys", "pytorch", "sentence_transformers"]}
bluebalam/paper-rec
null
[ "recsys", "pytorch", "sentence_transformers", "en", "arxiv:2109.03955", "arxiv:1908.10084", "license:mit", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.03955", "1908.10084" ]
[ "en" ]
TAGS #recsys #pytorch #sentence_transformers #en #arxiv-2109.03955 #arxiv-1908.10084 #license-mit #region-us
# 'paper-rec' Model Card Last updated: 2022-02-04 ## Model Details 'paper-rec' goal is to recommend users what scientific papers to read next based on their preferences. This is a test model used to explore Hugging Face Hub capabilities and identify requirements to enable support for recommendation task in the ecosystem. ### Model date 2022-02-04 ### Model type Recommender System model with support of a Language Model for feature extraction. ### Paper & samples The overall idea for 'paper-rec' test model is inspired by this work: NU:BRIEF – A Privacy-aware Newsletter Personalization Engine for Publishers. However, for 'paper-rec', we use a different language model more suitable for longer text, namely *Sentence Transformers*: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks, in particular: sentence-transformers/all-MiniLM-L6-v2. ## Model Use The intended direct users are recommender systems' practitioners and enthusiasts that would like to experiment with the task of scientific paper recommendation. ## Data, Performance, and Limitations ### Data The data used for this model corresponds to the RSS news feeds for arXiv updates accessed on 2022-02-04. In particular to the ones related to Machine Learning and AI: 1. Artificial Intelligence 1. Computation and Language 1. Computer Vision and Pattern Recognition 1. Information Retrieval 1. Machine Learning (cs) 1. Machine Learning (stat) ### Performance N/A ## Limitations The model is limited to the papers fetched on 2022-02-04, that is, those papers are the only ones it can recommend.
[ "# 'paper-rec' Model Card\r\n\r\nLast updated: 2022-02-04", "## Model Details\r\n'paper-rec' goal is to recommend users what scientific papers to read next based on their preferences. This is a test model used to explore Hugging Face Hub capabilities and identify requirements to enable support for recommendation task in the ecosystem.", "### Model date\r\n2022-02-04", "### Model type\r\nRecommender System model with support of a Language Model for feature extraction.", "### Paper & samples\r\nThe overall idea for 'paper-rec' test model is inspired by this work: NU:BRIEF – A Privacy-aware Newsletter Personalization Engine for Publishers.\r\n\r\nHowever, for 'paper-rec', we use a different language model more suitable for longer text, namely *Sentence Transformers*: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks, in particular: sentence-transformers/all-MiniLM-L6-v2.", "## Model Use\r\nThe intended direct users are recommender systems' practitioners and enthusiasts that would like to experiment with the task of scientific paper recommendation.", "## Data, Performance, and Limitations", "### Data \r\nThe data used for this model corresponds to the RSS news feeds for arXiv updates accessed on 2022-02-04. In particular to the ones related to Machine Learning and AI:\r\n\r\n1. Artificial Intelligence\r\n1. Computation and Language\r\n1. Computer Vision and Pattern Recognition\r\n1. Information Retrieval\r\n1. Machine Learning (cs)\r\n1. Machine Learning (stat)", "### Performance \r\nN/A", "## Limitations\r\nThe model is limited to the papers fetched on 2022-02-04, that is, those papers are the only ones it can recommend." ]
[ "TAGS\n#recsys #pytorch #sentence_transformers #en #arxiv-2109.03955 #arxiv-1908.10084 #license-mit #region-us \n", "# 'paper-rec' Model Card\r\n\r\nLast updated: 2022-02-04", "## Model Details\r\n'paper-rec' goal is to recommend users what scientific papers to read next based on their preferences. This is a test model used to explore Hugging Face Hub capabilities and identify requirements to enable support for recommendation task in the ecosystem.", "### Model date\r\n2022-02-04", "### Model type\r\nRecommender System model with support of a Language Model for feature extraction.", "### Paper & samples\r\nThe overall idea for 'paper-rec' test model is inspired by this work: NU:BRIEF – A Privacy-aware Newsletter Personalization Engine for Publishers.\r\n\r\nHowever, for 'paper-rec', we use a different language model more suitable for longer text, namely *Sentence Transformers*: Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks, in particular: sentence-transformers/all-MiniLM-L6-v2.", "## Model Use\r\nThe intended direct users are recommender systems' practitioners and enthusiasts that would like to experiment with the task of scientific paper recommendation.", "## Data, Performance, and Limitations", "### Data \r\nThe data used for this model corresponds to the RSS news feeds for arXiv updates accessed on 2022-02-04. In particular to the ones related to Machine Learning and AI:\r\n\r\n1. Artificial Intelligence\r\n1. Computation and Language\r\n1. Computer Vision and Pattern Recognition\r\n1. Information Retrieval\r\n1. Machine Learning (cs)\r\n1. Machine Learning (stat)", "### Performance \r\nN/A", "## Limitations\r\nThe model is limited to the papers fetched on 2022-02-04, that is, those papers are the only ones it can recommend." ]
text-generation
transformers
# Harry Potter Bot
{"tags": ["conversational"]}
bmdonnell/DialoGPT-medium-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter Bot
[ "# Harry Potter Bot" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter Bot" ]
automatic-speech-recognition
speechbrain
# Conformer Encoder/Decoder for Speech Translation

This model was trained with [SpeechBrain](https://speechbrain.github.io), and is based on the Fisher Callhome recipe. The performance of the model is the following:

| Release | CoVoSTv2 JA->EN Test BLEU | Custom Dataset Validation BLEU | Custom Dataset Test BLEU | GPUs |
|:-------------:|:--------------:|:--------------:|:--------------:|:--------:|
| 01-13-21 | 9.73 | 8.38 | 12.01 | 1xRTX 3090 |

This model was trained on subtitled audio downloaded from YouTube, and was not fine-tuned on the CoVoSTv2 training set. When calculating the BLEU score for CoVoSTv2, the utterances were first preprocessed by the same pipeline that preprocessed the original data for the model, which includes removing all punctuation outside of apostrophes and removing capitalization, similar to the data preprocessing done for the Fisher Callhome dataset in the SpeechBrain recipe.

## Pipeline description

The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.

## Install SpeechBrain

First of all, install SpeechBrain with the following command:

```
pip install speechbrain
```

### Transcribing your own audio files (Spoken Japanese, to written English)

```python
from speechbrain.pretrained import EncoderDecoderASR

st_model = EncoderDecoderASR.from_hparams(source="bob80333/speechbrain_ja2en_st_63M_yt600h")
st_model.transcribe_file("your_file_here.wav")
```

### Inference on GPU

To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.

### Limitations:

The model is likely to get caught in repetitions. The model is not very good at translation, which is reflected by its low BLEU scores. The outputs of this model are unlikely to be correct; do not rely on it for any serious purpose. This model was trained on data from YouTube, and has inherited whatever biases can be found in YouTube audio/subtitles. The creator of this model doesn't actually know Japanese.
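As a concrete version of the GPU option described above, here is a minimal sketch that combines the two snippets from this card (it assumes a CUDA-capable GPU is available; the file name is a placeholder):

```python
from speechbrain.pretrained import EncoderDecoderASR

# Load the model directly onto the GPU via run_opts, then transcribe as before.
st_model = EncoderDecoderASR.from_hparams(
    source="bob80333/speechbrain_ja2en_st_63M_yt600h",
    run_opts={"device": "cuda"},
)
print(st_model.transcribe_file("your_file_here.wav"))
```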
{"language": "en", "tags": ["speech-translation", "CTC", "Attention", "Transformer", "pytorch", "speechbrain", "automatic-speech-recognition"], "metrics": ["BLEU"]}
bob80333/speechbrain_ja2en_st_63M_yt600h
null
[ "speechbrain", "speech-translation", "CTC", "Attention", "Transformer", "pytorch", "automatic-speech-recognition", "en", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #speechbrain #speech-translation #CTC #Attention #Transformer #pytorch #automatic-speech-recognition #en #region-us
Conformer Encoder/Decoder for Speech Translation ================================================ This model was trained with SpeechBrain, and is based on the Fisher Callhome recipie. The performance of the model is the following: This model was trained on subtitled audio downloaded from YouTube, and was not fine-tuned on the CoVoSTv2 training set. When calculating the BLEU score for CoVoSTv2, the utterances were first preprocessed by the same pipeline that preprocessed the original data for the model, which includes removing all punctuation outside of apostrophes, and removing capitalization, similar to the data preprocessing done for the Fisher Callhome dataset in the speechbrain recipe. Pipeline description -------------------- The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe\_file* if needed. Install SpeechBrain ------------------- First of all, install SpeechBrain with the following command: ### Transcribing your own audio files (Spoken Japanese, to written English) ### Inference on GPU To perform inference on the GPU, add 'run\_opts={"device":"cuda"}' when calling the 'from\_hparams' method. ### Limitations: The model is likely to get caught in repetitions. The model is not very good at translation, which is reflected by its low BLEU scores. The outputs of this model are unlikely to be correct, do not rely on it for any serious purpose. This model was trained on data from Youtube, and has inherited whatever biases can be found in Youtube audio/subtitles. The creator of this model doesn't actually know Japanese.
[ "### Transcribing your own audio files (Spoken Japanese, to written English)", "### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.", "### Limitations:\n\n\nThe model is likely to get caught in repetitions. The model is not very good at translation, which is reflected by its low BLEU scores.\nThe outputs of this model are unlikely to be correct, do not rely on it for any serious purpose.\nThis model was trained on data from Youtube, and has inherited whatever biases can be found in Youtube audio/subtitles.\nThe creator of this model doesn't actually know Japanese." ]
[ "TAGS\n#speechbrain #speech-translation #CTC #Attention #Transformer #pytorch #automatic-speech-recognition #en #region-us \n", "### Transcribing your own audio files (Spoken Japanese, to written English)", "### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.", "### Limitations:\n\n\nThe model is likely to get caught in repetitions. The model is not very good at translation, which is reflected by its low BLEU scores.\nThe outputs of this model are unlikely to be correct, do not rely on it for any serious purpose.\nThis model was trained on data from Youtube, and has inherited whatever biases can be found in Youtube audio/subtitles.\nThe creator of this model doesn't actually know Japanese." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-small-finetuned-cnn-wei0

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7149
- Rouge1: 24.2324
- Rouge2: 11.7178
- Rougel: 20.0508
- Rougelsum: 22.8698
- Gen Len: 19.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9068        | 1.0   | 4786 | 1.7149          | 24.2324 | 11.7178 | 20.0508 | 22.8698   | 19.0    |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
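The auto-generated card above stops at training details. As a minimal usage sketch (assuming the standard `transformers` summarization pipeline; the article string is a placeholder):

```python
from transformers import pipeline

# Hypothetical usage sketch: summarize a CNN/DailyMail-style article with this checkpoint.
summarizer = pipeline("summarization", model="bochaowei/t5-small-finetuned-cnn-wei0")

article = "Replace this placeholder with a news article to summarize."
result = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```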
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["cnn_dailymail"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-cnn-wei0", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "cnn_dailymail", "type": "cnn_dailymail", "args": "3.0.0"}, "metrics": [{"type": "rouge", "value": 24.2324, "name": "Rouge1"}]}]}]}
bochaowei/t5-small-finetuned-cnn-wei0
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-cnn_dailymail #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-cnn-wei0 =========================== This model is a fine-tuned version of t5-small on the cnn\_dailymail dataset. It achieves the following results on the evaluation set: * Loss: 1.7149 * Rouge1: 24.2324 * Rouge2: 11.7178 * Rougel: 20.0508 * Rougelsum: 22.8698 * Gen Len: 19.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 12 * eval\_batch\_size: 12 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-cnn_dailymail #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-cnn-wei1 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 1.6819 - Rouge1: 41.1796 - Rouge2: 18.9426 - Rougel: 29.2338 - Rougelsum: 38.4087 - Gen Len: 72.7607 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.8582 | 1.0 | 23927 | 1.6819 | 41.1796 | 18.9426 | 29.2338 | 38.4087 | 72.7607 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["cnn_dailymail"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-cnn-wei1", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "cnn_dailymail", "type": "cnn_dailymail", "args": "3.0.0"}, "metrics": [{"type": "rouge", "value": 41.1796, "name": "Rouge1"}]}]}]}
bochaowei/t5-small-finetuned-cnn-wei1
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-cnn_dailymail #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-cnn-wei1 =========================== This model is a fine-tuned version of t5-small on the cnn\_dailymail dataset. It achieves the following results on the evaluation set: * Loss: 1.6819 * Rouge1: 41.1796 * Rouge2: 18.9426 * Rougel: 29.2338 * Rougelsum: 38.4087 * Gen Len: 72.7607 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 12 * eval\_batch\_size: 12 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-cnn_dailymail #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-small-finetuned-xsum-wei0

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6289
- Rouge1: 25.7398
- Rouge2: 6.1361
- Rougel: 19.8262
- Rougelsum: 19.8284
- Gen Len: 18.7984

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.858         | 1.0   | 1701 | 2.6289          | 25.7398 | 6.1361 | 19.8262 | 19.8284   | 18.7984 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
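As with the other checkpoints in this series, the card does not show inference code. Below is a minimal sketch using the lower-level `transformers` API; the `summarize:` prefix is an assumption carried over from the usual T5 summarization setup, and the input text is a placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "bochaowei/t5-small-finetuned-xsum-wei0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The "summarize: " prefix follows the common T5 convention; it may not be required here.
text = "summarize: " + "Replace this placeholder with a BBC-style article."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```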
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-xsum-wei0", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "xsum", "type": "xsum", "args": "default"}, "metrics": [{"type": "rouge", "value": 25.7398, "name": "Rouge1"}]}]}]}
bochaowei/t5-small-finetuned-xsum-wei0
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-xsum-wei0 ============================ This model is a fine-tuned version of t5-small on the xsum dataset. It achieves the following results on the evaluation set: * Loss: 2.6289 * Rouge1: 25.7398 * Rouge2: 6.1361 * Rougel: 19.8262 * Rougelsum: 19.8284 * Gen Len: 18.7984 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 12 * eval\_batch\_size: 12 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
20% of the training data --- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum-wei1 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 27.5875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-wei1 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5287 - Rouge1: 27.5875 - Rouge2: 7.4083 - Rougel: 21.5654 - Rougelsum: 21.5716 - Gen Len: 18.8205 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.7677 | 1.0 | 3401 | 2.5441 | 27.4235 | 7.2208 | 21.3535 | 21.3636 | 18.8311 | | 2.735 | 2.0 | 6802 | 2.5287 | 27.5875 | 7.4083 | 21.5654 | 21.5716 | 18.8205 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
{}
bochaowei/t5-small-finetuned-xsum-wei1
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
20% of the training data ------------------------ license: apache-2.0 tags: * generated\_from\_trainer datasets: * xsum metrics: * rouge model-index: * name: t5-small-finetuned-xsum-wei1 results: + task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 27.5875 --- t5-small-finetuned-xsum-wei1 ============================ This model is a fine-tuned version of t5-small on the xsum dataset. It achieves the following results on the evaluation set: * Loss: 2.5287 * Rouge1: 27.5875 * Rouge2: 7.4083 * Rougel: 21.5654 * Rougelsum: 21.5716 * Gen Len: 18.8205 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 12 * eval\_batch\_size: 12 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-wei2 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4131 - Rouge1: 29.2287 - Rouge2: 8.4073 - Rougel: 23.0934 - Rougelsum: 23.0954 - Gen Len: 18.8236 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.633 | 1.0 | 17004 | 2.4131 | 29.2287 | 8.4073 | 23.0934 | 23.0954 | 18.8236 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-xsum-wei2", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "xsum", "type": "xsum", "args": "default"}, "metrics": [{"type": "rouge", "value": 29.2287, "name": "Rouge1"}]}]}]}
bochaowei/t5-small-finetuned-xsum-wei2
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-xsum-wei2 ============================ This model is a fine-tuned version of t5-small on the xsum dataset. It achieves the following results on the evaluation set: * Loss: 2.4131 * Rouge1: 29.2287 * Rouge2: 8.4073 * Rougel: 23.0934 * Rougelsum: 23.0954 * Gen Len: 18.8236 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 4e-05 * train\_batch\_size: 12 * eval\_batch\_size: 12 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.14.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
# GPT2-Persian

bolbolzaban/gpt2-persian is a GPT-2 language model trained with hyperparameters similar to standard gpt2-medium, with the following differences:

1. The context size is reduced from 1024 to 256 subwords in order to make the training affordable.
2. Instead of BPE, the Google SentencePiece tokenizer is used for tokenization.
3. The training dataset only includes Persian text. All non-Persian characters are replaced with special tokens (e.g. [LAT], [URL], [NUM]).

Please refer to this [blog post](https://medium.com/@khashei/a-not-so-dangerous-ai-in-the-persian-language-39172a641c84) for further detail. Also try the model [here](https://huggingface.co/bolbolzaban/gpt2-persian?text=%D8%AF%D8%B1+%DB%8C%DA%A9+%D8%A7%D8%AA%D9%81%D8%A7%D9%82+%D8%B4%DA%AF%D9%81%D8%AA+%D8%A7%D9%86%DA%AF%DB%8C%D8%B2%D8%8C+%D9%BE%DA%98%D9%88%D9%87%D8%B4%DA%AF%D8%B1%D8%A7%D9%86) or on [Bolbolzaban.com](http://www.bolbolzaban.com/text).

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained('bolbolzaban/gpt2-persian')
model = GPT2LMHeadModel.from_pretrained('bolbolzaban/gpt2-persian')
generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':256})
sample = generator('در یک اتفاق شگفت انگیز، پژوهشگران')
```

If you are using TensorFlow, import TFGPT2LMHeadModel instead of GPT2LMHeadModel.

## Fine-tuning

Find a basic fine-tuning example on this [GitHub repo](https://github.com/khashei/bolbolzaban-gpt2-persian).

## Special Tokens

gpt2-persian is trained for the purpose of research on Persian poetry. Because of that, all English words and numbers are replaced with special tokens and only the standard Persian alphabet is used as part of the input text. Here is one example:

Original text: اگر آیفون یا آیپد شما دارای سیستم عامل iOS 14.3 یا iPadOS 14.3 یا نسخه‌های جدیدتر باشد

Text used in training: اگر آیفون یا آیپد شما دارای سیستم عامل [LAT] [NUM] یا [LAT] [NUM] یا نسخه‌های جدیدتر باشد

Please consider normalizing your input text using [Hazm](https://github.com/sobhe/hazm) or similar libraries and ensure only Persian characters are provided as input.

If you want to use classical Persian poetry as input, use [BOM] (beginning of mesra) at the beginning of each verse (مصرع), followed by [EOS] (end of statement) at the end of each couplet (بیت).
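For instance, a small illustration that reuses the `generator` from the "How to use" snippet above (the prompt mirrors the example links that follow; output will vary):

```python
# Prompt the generator with [BOM] markers as described above.
sample = generator('[BOM] توانا بود هر که دانا بود [BOM]')
print(sample)
```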
See the following links for examples:

[[BOM] توانا بود](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF)

[[BOM] توانا بود هر که دانا بود [BOM]](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%D9%87%D8%B1+%DA%A9%D9%87+%D8%AF%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%5BBOM%5D)

[[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیر](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%D9%87%D8%B1+%DA%A9%D9%87+%D8%AF%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%5BBOM%5D+%D8%B2+%D8%AF%D8%A7%D9%86%D8%B4+%D8%AF%D9%84+%D9%BE%DB%8C%D8%B1)

[[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیربرنا بود [EOS]](https://huggingface.co/bolbolzaban/gpt2-persian?text=%5BBOM%5D+%D8%AA%D9%88%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%D9%87%D8%B1+%DA%A9%D9%87+%D8%AF%D8%A7%D9%86%D8%A7+%D8%A8%D9%88%D8%AF+%5BBOM%5D+%D8%B2+%D8%AF%D8%A7%D9%86%D8%B4+%D8%AF%D9%84+%D9%BE%DB%8C%D8%B1%D8%A8%D8%B1%D9%86%D8%A7+%D8%A8%D9%88%D8%AF++%5BEOS%5D)

If you would like to know about the structure of classical Persian poetry, refer to these [blog posts](https://medium.com/@khashei).

## Acknowledgment

This project is supported by Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).

## Citation and Reference

Please reference the "bolbolzaban.com" website if you are using gpt2-persian in your research or commercial application.

## Contacts

Please reach out on [LinkedIn](https://www.linkedin.com/in/khashei/) or [Telegram](https://t.me/khasheia) if you have any questions or need any help using the model.

Follow [Bolbolzaban](http://bolbolzaban.com/about) on [Twitter](https://twitter.com/bolbol_zaban), [Telegram](https://t.me/bolbol_zaban) or [Instagram](https://www.instagram.com/bolbolzaban/).
{"language": "fa", "license": "apache-2.0", "tags": ["farsi", "persian"]}
bolbolzaban/gpt2-persian
null
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "farsi", "persian", "fa", "doi:10.57967/hf/1207", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "fa" ]
TAGS #transformers #pytorch #tf #jax #gpt2 #text-generation #farsi #persian #fa #doi-10.57967/hf/1207 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# GPT2-Persian bolbolzaban/gpt2-persian is gpt2 language model that is trained with hyper parameters similar to standard gpt2-medium with following differences: 1. The context size is reduced from 1024 to 256 sub words in order to make the training affordable 2. Instead of BPE, google sentence piece tokenizor is used for tokenization. 3. The training dataset only include Persian text. All non-persian characters are replaced with especial tokens (e.g [LAT], [URL], [NUM]) Please refer to this blog post for further detail. Also try the model here or on URL. ## How to use You can use this model directly with a pipeline for text generation: If you are using Tensorflow import TFGPT2LMHeadModel instead of GPT2LMHeadModel. ## Fine-tuning Find a basic fine-tuning example on this Github Repo. ## Special Tokens gpt-persian is trained for the purpose of research on Persian poetry. Because of that all english words and numbers are replaced with special tokens and only standard Persian alphabet is used as part of input text. Here is one example: Original text: اگر آیفون یا آیپد شما دارای سیستم عامل iOS 14.3 یا iPadOS 14.3 یا نسخه‌های جدیدتر باشد Text used in training: اگر آیفون یا آیپد شما دارای سیستم عامل [LAT] [NUM] یا [LAT] [NUM] یا نسخه‌های جدیدتر باشد Please consider normalizing your input text using Hazm or similar libraries and ensure only Persian characters are provided as input. If you want to use classical Persian poetry as input use [BOM] (begining of mesra) at the beginning of each verse (مصرع) followed by [EOS] (end of statement) at the end of each couplet (بیت). See following links for example: [[BOM] توانا بود](URL [[BOM] توانا بود هر که دانا بود [BOM]](URL [[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیر](URL [[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیربرنا بود [EOS]](URL If you like to know about structure of classical Persian poetry refer to these blog posts. ## Acknowledgment This project is supported by Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). and Reference Please reference "URL" website if you are using gpt2-persian in your research or commertial application. ## Contacts Please reachout on Linkedin or Telegram if you have any question or need any help to use the model. Follow Bolbolzaban on Twitter, Telegram or Instagram
[ "# GPT2-Persian\nbolbolzaban/gpt2-persian is gpt2 language model that is trained with hyper parameters similar to standard gpt2-medium with following differences:\n1. The context size is reduced from 1024 to 256 sub words in order to make the training affordable \n2. Instead of BPE, google sentence piece tokenizor is used for tokenization.\n3. The training dataset only include Persian text. All non-persian characters are replaced with especial tokens (e.g [LAT], [URL], [NUM])\n\nPlease refer to this blog post for further detail. \nAlso try the model here or on URL.", "## How to use\nYou can use this model directly with a pipeline for text generation:\n\n\nIf you are using Tensorflow import TFGPT2LMHeadModel instead of GPT2LMHeadModel.", "## Fine-tuning\nFind a basic fine-tuning example on this Github Repo.", "## Special Tokens\ngpt-persian is trained for the purpose of research on Persian poetry. Because of that all english words and numbers are replaced with special tokens and only standard Persian alphabet is used as part of input text. Here is one example:\n\nOriginal text: اگر آیفون یا آیپد شما دارای سیستم عامل iOS 14.3 یا iPadOS 14.3 یا نسخه‌های جدیدتر باشد\n\nText used in training: اگر آیفون یا آیپد شما دارای سیستم عامل [LAT] [NUM] یا [LAT] [NUM] یا نسخه‌های جدیدتر باشد\n\nPlease consider normalizing your input text using Hazm or similar libraries and ensure only Persian characters are provided as input.\n\nIf you want to use classical Persian poetry as input use [BOM] (begining of mesra) at the beginning of each verse (مصرع) followed by [EOS] (end of statement) at the end of each couplet (بیت). \n\nSee following links for example:\n\n[[BOM] توانا بود](URL\n\n[[BOM] توانا بود هر که دانا بود [BOM]](URL\n\n[[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیر](URL\n\n[[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیربرنا بود [EOS]](URL\n\nIf you like to know about structure of classical Persian poetry refer to these blog posts.", "## Acknowledgment\nThis project is supported by Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).\nand Reference\nPlease reference \"URL\" website if you are using gpt2-persian in your research or commertial application.", "## Contacts\nPlease reachout on Linkedin or Telegram if you have any question or need any help to use the model.\n\nFollow Bolbolzaban on Twitter, Telegram or Instagram" ]
[ "TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #farsi #persian #fa #doi-10.57967/hf/1207 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# GPT2-Persian\nbolbolzaban/gpt2-persian is gpt2 language model that is trained with hyper parameters similar to standard gpt2-medium with following differences:\n1. The context size is reduced from 1024 to 256 sub words in order to make the training affordable \n2. Instead of BPE, google sentence piece tokenizor is used for tokenization.\n3. The training dataset only include Persian text. All non-persian characters are replaced with especial tokens (e.g [LAT], [URL], [NUM])\n\nPlease refer to this blog post for further detail. \nAlso try the model here or on URL.", "## How to use\nYou can use this model directly with a pipeline for text generation:\n\n\nIf you are using Tensorflow import TFGPT2LMHeadModel instead of GPT2LMHeadModel.", "## Fine-tuning\nFind a basic fine-tuning example on this Github Repo.", "## Special Tokens\ngpt-persian is trained for the purpose of research on Persian poetry. Because of that all english words and numbers are replaced with special tokens and only standard Persian alphabet is used as part of input text. Here is one example:\n\nOriginal text: اگر آیفون یا آیپد شما دارای سیستم عامل iOS 14.3 یا iPadOS 14.3 یا نسخه‌های جدیدتر باشد\n\nText used in training: اگر آیفون یا آیپد شما دارای سیستم عامل [LAT] [NUM] یا [LAT] [NUM] یا نسخه‌های جدیدتر باشد\n\nPlease consider normalizing your input text using Hazm or similar libraries and ensure only Persian characters are provided as input.\n\nIf you want to use classical Persian poetry as input use [BOM] (begining of mesra) at the beginning of each verse (مصرع) followed by [EOS] (end of statement) at the end of each couplet (بیت). \n\nSee following links for example:\n\n[[BOM] توانا بود](URL\n\n[[BOM] توانا بود هر که دانا بود [BOM]](URL\n\n[[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیر](URL\n\n[[BOM] توانا بود هر که دانا بود [BOM] ز دانش دل پیربرنا بود [EOS]](URL\n\nIf you like to know about structure of classical Persian poetry refer to these blog posts.", "## Acknowledgment\nThis project is supported by Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).\nand Reference\nPlease reference \"URL\" website if you are using gpt2-persian in your research or commertial application.", "## Contacts\nPlease reachout on Linkedin or Telegram if you have any question or need any help to use the model.\n\nFollow Bolbolzaban on Twitter, Telegram or Instagram" ]
text-generation
transformers
# Personal DialoGPT Model
{"tags": ["conversational"]}
bonebambi/DialoGPT-small-ThakirClone
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Personal DialoGPT Model
[ "# Personal DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Personal DialoGPT Model" ]
audio-classification
transformers
# DistilWav2Vec2 Adult/Child Speech Classifier 37M

DistilWav2Vec2 Adult/Child Speech Classifier is an audio classification model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. This model is a distilled version of [wav2vec2-adult-child-cls](https://huggingface.co/bookbot/wav2vec2-adult-child-cls) on a private adult/child speech classification dataset.

This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard.

## Model

| Model                                 | #params | Arch.       | Training/Validation data (text)           |
| ------------------------------------- | ------- | ----------- | ----------------------------------------- |
| `distil-wav2vec2-adult-child-cls-37m` | 37M     | wav2vec 2.0 | Adult/Child Speech Classification Dataset |

## Evaluation Results

The model achieves the following results on evaluation:

| Dataset                           | Loss   | Accuracy | F1     |
| --------------------------------- | ------ | -------- | ------ |
| Adult/Child Speech Classification | 0.1431 | 95.89%   | 0.9624 |

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- `learning_rate`: 3e-05
- `train_batch_size`: 32
- `eval_batch_size`: 32
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 128
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_ratio`: 0.1
- `num_epochs`: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
| :-----------: | :---: | :--: | :-------------: | :------: | :----: |
| 0.2586        | 1.0   | 96   | 0.2257          | 0.9298   | 0.9363 |
| 0.1917        | 2.0   | 192  | 0.1743          | 0.9460   | 0.9500 |
| 0.1568        | 3.0   | 288  | 0.1701          | 0.9511   | 0.9545 |
| 0.0965        | 4.0   | 384  | 0.1501          | 0.9548   | 0.9584 |
| 0.1179        | 5.0   | 480  | 0.1431          | 0.9589   | 0.9624 |

## Disclaimer

Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.

## Authors

DistilWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by [Ananto Joyoadikusumo](https://anantoj.github.io/). All computation and development are done on Kaggle.

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
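The card does not include an inference example. A minimal usage sketch, assuming the standard `transformers` audio-classification pipeline (the file name is a placeholder; the clip should be speech audio):

```python
from transformers import pipeline

# Hypothetical usage sketch: classify a speech clip as adult or child speech.
# The pipeline decodes the file and resamples it to the rate expected by the model.
classifier = pipeline("audio-classification", model="bookbot/distil-wav2vec2-adult-child-cls-37m")
print(classifier("speech_sample.wav"))
```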
{"language": "en", "license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distil-wav2vec2-adult-child-cls-37m", "results": []}]}
bookbot/distil-wav2vec2-adult-child-cls-37m
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "en", "arxiv:2006.11477", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2006.11477" ]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #region-us
DistilWav2Vec2 Adult/Child Speech Classifier 37M ================================================ DistilWav2Vec2 Adult/Child Speech Classifier is an audio classification model based on the wav2vec 2.0 architecture. This model is a distilled version of wav2vec2-adult-child-cls on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. Model ----- Evaluation Results ------------------ The model achieves the following results on evaluation: Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * 'learning\_rate': 3e-05 * 'train\_batch\_size': 32 * 'eval\_batch\_size': 32 * 'seed': 42 * 'gradient\_accumulation\_steps': 4 * 'total\_train\_batch\_size': 128 * 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08' * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_warmup\_ratio': 0.1 * 'num\_epochs': 5 ### Training results Disclaimer ---------- Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. Authors ------- DistilWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by Ananto Joyoadikusumo. All computation and development are done on Kaggle. ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 32\n* 'eval\\_batch\\_size': 32\n* 'seed': 42\n* 'gradient\\_accumulation\\_steps': 4\n* 'total\\_train\\_batch\\_size': 128\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 5", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nDistilWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by Ananto Joyoadikusumo. All computation and development are done on Kaggle.", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 32\n* 'eval\\_batch\\_size': 32\n* 'seed': 42\n* 'gradient\\_accumulation\\_steps': 4\n* 'total\\_train\\_batch\\_size': 128\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 5", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nDistilWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by Ananto Joyoadikusumo. All computation and development are done on Kaggle.", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
audio-classification
transformers
# DistilWav2Vec2 Adult/Child Speech Classifier 52M DistilWav2Vec2 Adult/Child Speech Classifier is an audio classification model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. This model is a distilled version of [wav2vec2-adult-child-cls](https://huggingface.co/bookbot/wav2vec2-adult-child-cls) on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ------------------------------------- | ------- | ----------- | ----------------------------------------- | | `distil-wav2vec2-adult-child-cls-52m` | 52M | wav2vec 2.0 | Adult/Child Speech Classification Dataset | ## Evaluation Results The model achieves the following results on evaluation: | Dataset | Loss | Accuracy | F1 | | --------------------------------- | ------ | -------- | ------ | | Adult/Child Speech Classification | 0.1301 | 96.03% | 0.9639 | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - `learning_rate`: 3e-05 - `train_batch_size`: 32 - `eval_batch_size`: 32 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 128 - `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08` - `lr_scheduler_type`: linear - `lr_scheduler_warmup_ratio`: 0.1 - `num_epochs`: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | | :-----------: | :---: | :--: | :-------------: | :------: | :----: | | 0.212 | 1.0 | 96 | 0.1561 | 0.9561 | 0.9596 | | 0.1523 | 2.0 | 192 | 0.1408 | 0.9575 | 0.9616 | | 0.0844 | 3.0 | 288 | 0.1301 | 0.9603 | 0.9639 | ## Disclaimer Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. ## Authors DistilWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Kaggle. ## Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
{"language": "en", "license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distil-wav2vec2-adult-child-cls-52m", "results": []}]}
bookbot/distil-wav2vec2-adult-child-cls-52m
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "en", "arxiv:2006.11477", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2006.11477" ]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #region-us
DistilWav2Vec2 Adult/Child Speech Classifier 52M ================================================ DistilWav2Vec2 Adult/Child Speech Classifier is an audio classification model based on the wav2vec 2.0 architecture. This model is a distilled version of wav2vec2-adult-child-cls on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. Model ----- Evaluation Results ------------------ The model achieves the following results on evaluation: Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * 'learning\_rate': 3e-05 * 'train\_batch\_size': 32 * 'eval\_batch\_size': 32 * 'seed': 42 * 'gradient\_accumulation\_steps': 4 * 'total\_train\_batch\_size': 128 * 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08' * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_warmup\_ratio': 0.1 * 'num\_epochs': 3 ### Training results Disclaimer ---------- Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. Authors ------- DistilWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle. Framework versions ------------------ * Transformers 4.16.2 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 32\n* 'eval\\_batch\\_size': 32\n* 'seed': 42\n* 'gradient\\_accumulation\\_steps': 4\n* 'total\\_train\\_batch\\_size': 128\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 3", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nDistilWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle.\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 32\n* 'eval\\_batch\\_size': 32\n* 'seed': 42\n* 'gradient\\_accumulation\\_steps': 4\n* 'total\\_train\\_batch\\_size': 128\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 3", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nDistilWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle.\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
audio-classification
transformers
# DistilWav2Vec2 XLS-R Adult/Child Speech Classifier 64M

DistilWav2Vec2 XLS-R Adult/Child Speech Classifier is an audio classification model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a distilled version of [wav2vec2-xls-r-adult-child-cls](https://huggingface.co/bookbot/wav2vec2-xls-r-adult-child-cls) on a private adult/child speech classification dataset.

This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard.

## Model

| Model                                       | #params | Arch. | Training/Validation data (text)           |
| ------------------------------------------- | ------- | ----- | ----------------------------------------- |
| `distil-wav2vec2-xls-r-adult-child-cls-64m` | 64M     | XLS-R | Adult/Child Speech Classification Dataset |

## Evaluation Results

The model achieves the following results on evaluation:

| Dataset                           | Loss   | Accuracy | F1     |
| --------------------------------- | ------ | -------- | ------ |
| Adult/Child Speech Classification | 0.2571 | 93.86%   | 0.9425 |

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- `learning_rate`: 3e-05
- `train_batch_size`: 16
- `eval_batch_size`: 16
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 64
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_ratio`: 0.1
- `num_epochs`: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
| :-----------: | :---: | :--: | :-------------: | :------: | :----: |
| 0.5509        | 1.0   | 191  | 0.3685          | 0.9086   | 0.9131 |
| 0.4543        | 2.0   | 382  | 0.3113          | 0.9247   | 0.9285 |
| 0.409         | 3.0   | 573  | 0.2723          | 0.9372   | 0.9418 |
| 0.3024        | 4.0   | 764  | 0.2786          | 0.9381   | 0.9417 |
| 0.3103        | 5.0   | 955  | 0.2571          | 0.9386   | 0.9425 |

## Disclaimer

Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.

## Authors

DistilWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by [Ananto Joyoadikusumo](https://anantoj.github.io/). All computation and development are done on Kaggle.

## Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
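No inference snippet is given in the card. A lower-level sketch, assuming the usual `transformers` audio-classification classes and `torchaudio` for loading (the file name is a placeholder; a mono clip is assumed):

```python
import torch
import torchaudio
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "bookbot/distil-wav2vec2-xls-r-adult-child-cls-64m"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

# Load a clip and resample it to the 16 kHz rate that wav2vec 2.0 / XLS-R models expect.
speech, sampling_rate = torchaudio.load("speech_sample.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech).squeeze().numpy()

inputs = extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```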
{"language": "en", "license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distil-wav2vec2-xls-r-adult-child-cls-64m", "results": []}]}
bookbot/distil-wav2vec2-xls-r-adult-child-cls-64m
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "en", "arxiv:2111.09296", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2111.09296" ]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2111.09296 #license-apache-2.0 #endpoints_compatible #region-us
DistilWav2Vec2 XLS-R Adult/Child Speech Classifier 64M ====================================================== DistilWav2Vec2 XLS-R Adult/Child Speech Classifier is an audio classification model based on the XLS-R architecture. This model is a distilled version of wav2vec2-xls-r-adult-child-cls on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. Model ----- Evaluation Results ------------------ The model achieves the following results on evaluation: Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * 'learning\_rate': 3e-05 * 'train\_batch\_size': 16 * 'eval\_batch\_size': 16 * 'seed': 42 * 'gradient\_accumulation\_steps': 4 * 'total\_train\_batch\_size': 64 * 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08' * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_warmup\_ratio': 0.1 * 'num\_epochs': 5 ### Training results Disclaimer ---------- Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. Authors ------- DistilWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by Ananto Joyoadikusumo. All computation and development are done on Kaggle. Framework versions ------------------ * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 16\n* 'eval\\_batch\\_size': 16\n* 'seed': 42\n* 'gradient\\_accumulation\\_steps': 4\n* 'total\\_train\\_batch\\_size': 64\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 5", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nDistilWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by Ananto Joyoadikusumo. All computation and development are done on Kaggle.\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2111.09296 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 16\n* 'eval\\_batch\\_size': 16\n* 'seed': 42\n* 'gradient\\_accumulation\\_steps': 4\n* 'total\\_train\\_batch\\_size': 64\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 5", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nDistilWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by Ananto Joyoadikusumo. All computation and development are done on Kaggle.\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
audio-classification
transformers
# DistilWav2Vec2 XLS-R Adult/Child Speech Classifier 89M DistilWav2Vec2 XLS-R Adult/Child Speech Classifier is an audio classification model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a distilled version of [wav2vec2-xls-r-adult-child-cls](https://huggingface.co/bookbot/wav2vec2-xls-r-adult-child-cls) on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ------------------------------------------- | ------- | ----- | ----------------------------------------- | | `distil-wav2vec2-xls-r-adult-child-cls-89m` | 89M | XLS-R | Adult/Child Speech Classification Dataset | ## Evaluation Results The model achieves the following results on evaluation: | Dataset | Loss | Accuracy | F1 | | --------------------------------- | ------ | -------- | ------ | | Adult/Child Speech Classification | 0.3048 | 93.54% | 0.9420 | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - `learning_rate`: 3e-05 - `train_batch_size`: 32 - `eval_batch_size`: 32 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 128 - `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08` - `lr_scheduler_type`: linear - `lr_scheduler_warmup_ratio`: 0.1 - `num_epochs`: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | | :-----------: | :---: | :--: | :-------------: | :------: | :----: | | 0.7711 | 1.0 | 96 | 0.5413 | 0.9017 | 0.9156 | | 0.5551 | 2.0 | 192 | 0.4627 | 0.9164 | 0.9272 | | 0.4166 | 3.0 | 288 | 0.3832 | 0.9261 | 0.9352 | | 0.3928 | 4.0 | 384 | 0.3242 | 0.9331 | 0.9406 | | 0.3622 | 5.0 | 480 | 0.3048 | 0.9354 | 0.9420 | ## Disclaimer Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. ## Authors DistilWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Kaggle. ## Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
{"language": "en", "license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distil-wav2vec2-xls-r-adult-child-cls-89m", "results": []}]}
bookbot/distil-wav2vec2-xls-r-adult-child-cls-89m
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "en", "arxiv:2111.09296", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2111.09296" ]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2111.09296 #license-apache-2.0 #endpoints_compatible #region-us
DistilWav2Vec2 XLS-R Adult/Child Speech Classifier 89M ====================================================== DistilWav2Vec2 XLS-R Adult/Child Speech Classifier is an audio classification model based on the XLS-R architecture. This model is a distilled version of wav2vec2-xls-r-adult-child-cls on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. Model ----- Evaluation Results ------------------ The model achieves the following results on evaluation: Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * 'learning\_rate': 3e-05 * 'train\_batch\_size': 32 * 'eval\_batch\_size': 32 * 'seed': 42 * 'gradient\_accumulation\_steps': 4 * 'total\_train\_batch\_size': 128 * 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08' * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_warmup\_ratio': 0.1 * 'num\_epochs': 5 ### Training results Disclaimer ---------- Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. Authors ------- DistilWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle. Framework versions ------------------ * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 32\n* 'eval\\_batch\\_size': 32\n* 'seed': 42\n* 'gradient\\_accumulation\\_steps': 4\n* 'total\\_train\\_batch\\_size': 128\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 5", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nDistilWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle.\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2111.09296 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 32\n* 'eval\\_batch\\_size': 32\n* 'seed': 42\n* 'gradient\\_accumulation\\_steps': 4\n* 'total\\_train\\_batch\\_size': 128\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 5", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nDistilWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle.\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-generation
transformers
## GPT-2 Indonesian Medium Kids Stories GPT-2 Indonesian Medium Kids Stories is a causal language model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. The model was originally the pre-trained [GPT2 Medium Indonesian](https://huggingface.co/flax-community/gpt2-medium-indonesian) model, which was then fine-tuned on Indonesian kids' stories from [Room To Read](https://literacycloud.org/) and [Let's Read](https://reader.letsreadasia.org/). 10% of the dataset was kept for evaluation purposes. The pre-trained model was fine-tuned and achieved an evaluation loss of 3.579 and an evaluation perplexity of 35.84. Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ------------------------------- | ------- | ----------- | --------------------------------- | | `gpt2-indo-medium-kids-stories` | 345M | GPT2 Medium | Indonesian Kids' Stories (860 KB) | ## Evaluation Results The model was fine-tuned for 3 epochs. | Epoch | Training Loss | Validation Loss | | ----- | ------------- | --------------- | | 1 | 3.909100 | 3.627678 | | 2 | 3.375300 | 3.562854 | | 3 | 3.113300 | 3.578999 | ## How to Use (PyTorch) ### As Causal Language Model ```python from transformers import pipeline pretrained_name = "bookbot/gpt2-indo-medium-kids-stories" nlp = pipeline( "text-generation", model=pretrained_name, tokenizer=pretrained_name ) nlp("Archie sedang mengendarai roket ke planet Mars.") ``` ### Feature Extraction in PyTorch ```python from transformers import GPT2LMHeadModel, GPT2TokenizerFast pretrained_name = "bookbot/gpt2-indo-medium-kids-stories" model = GPT2LMHeadModel.from_pretrained(pretrained_name) tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name) prompt = "Archie sedang mengendarai roket ke planet Mars." encoded_input = tokenizer(prompt, return_tensors='pt') output = model(**encoded_input) ``` ## Disclaimer Do consider the biases which come from both the pre-trained GPT-2 model and the Indonesian Kids' Stories dataset that may be carried over into the results of this model. ## Author GPT-2 Indonesian Medium Kids Stories was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
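The card's feature-extraction example returns raw logits rather than generated text; as a complement, here is a small sketch of open-ended generation with `generate`. The sampling parameters are illustrative choices, not values prescribed by the card.

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

pretrained_name = "bookbot/gpt2-indo-medium-kids-stories"
model = GPT2LMHeadModel.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)

prompt = "Archie sedang mengendarai roket ke planet Mars."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sampling settings below are illustrative, not taken from the model card.
output_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```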
{"language": "id", "license": "mit", "tags": ["gpt2-indo-medium-kids-stories"], "widget": [{"text": "Archie sedang mengendarai roket ke planet Mars."}]}
bookbot/gpt2-indo-medium-kids-stories
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "gpt2-indo-medium-kids-stories", "id", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #safetensors #gpt2 #text-generation #gpt2-indo-medium-kids-stories #id #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT-2 Indonesian Medium Kids Stories ------------------------------------ GPT-2 Indonesian Medium Kids Stories is a causal language model based on the OpenAI GPT-2 model. The model was originally the pre-trained GPT2 Medium Indonesian model, which was then fine-tuned on Indonesian kids' stories from Room To Read and Let's Read. 10% of the dataset was kept for evaluation purposes. The pre-trained model was fine-tuned and achieved an evaluation loss of 3.579 and an evaluation perplexity of 35.84. Hugging Face's 'Trainer' class from the Transformers library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless. Model ----- Evaluation Results ------------------ The model was fine-tuned for 3 epochs. Epoch: 1, Training Loss: 3.909100, Validation Loss: 3.627678 Epoch: 2, Training Loss: 3.375300, Validation Loss: 3.562854 Epoch: 3, Training Loss: 3.113300, Validation Loss: 3.578999 How to Use (PyTorch) -------------------- ### As Causal Language Model ### Feature Extraction in PyTorch Disclaimer ---------- Do consider the biases which come from both the pre-trained GPT-2 model and the Indonesian Kids' Stories dataset that may be carried over into the results of this model. Author ------ GPT-2 Indonesian Medium Kids Stories was trained and evaluated by Wilson Wongso. All computation and development are done on Google Colaboratory using their free GPU access.
[ "### As Causal Language Model", "### Feature Extraction in PyTorch\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which come from both the pre-trained GPT-2 model and the Indonesian Kids' Stories dataset that may be carried over into the results of this model.\n\n\nAuthor\n------\n\n\nGPT-2 Indonesian Medium Kids Stories was trained and evaluated by Wilson Wongso. All computation and development are done on Google Colaboratory using their free GPU access." ]
[ "TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #gpt2-indo-medium-kids-stories #id #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### As Causal Language Model", "### Feature Extraction in PyTorch\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which come from both the pre-trained GPT-2 model and the Indonesian Kids' Stories dataset that may be carried over into the results of this model.\n\n\nAuthor\n------\n\n\nGPT-2 Indonesian Medium Kids Stories was trained and evaluated by Wilson Wongso. All computation and development are done on Google Colaboratory using their free GPU access." ]
text-generation
transformers
## GPT-2 Indonesian Small Kids Stories GPT-2 Indonesian Small Kids Stories is a causal language model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. The model was originally the pre-trained [GPT2 Small Indonesian](https://huggingface.co/flax-community/gpt2-small-indonesian) model, which was then fine-tuned on Indonesian kids' stories from [Room To Read](https://literacycloud.org/) and [Let's Read](https://reader.letsreadasia.org/). 10% of the dataset was kept for evaluation purposes. The pre-trained model was fine-tuned and achieved an evaluation loss of 3.777 and an evaluation perplexity of 43.68. Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ------------------------------ | ------- | ---------- | --------------------------------- | | `gpt2-indo-small-kids-stories` | 124M | GPT2 Small | Indonesian Kids' Stories (860 KB) | ## Evaluation Results The model was fine-tuned for 10 epochs. | Epoch | Training Loss | Validation Loss | | ----- | ------------- | --------------- | | 1 | 4.259600 | 4.020201 | | 2 | 3.979100 | 3.911295 | | 3 | 3.818300 | 3.849313 | | 4 | 3.691600 | 3.809931 | | 5 | 3.589300 | 3.789201 | | 6 | 3.506200 | 3.778665 | | 7 | 3.439200 | 3.774871 | | 8 | 3.387600 | 3.774859 | | 9 | 3.351300 | 3.776672 | | 10 | 3.330100 | 3.776935 | ## How to Use (PyTorch) ### As Causal Language Model ```python from transformers import pipeline pretrained_name = "bookbot/gpt2-indo-small-kids-stories" nlp = pipeline( "text-generation", model=pretrained_name, tokenizer=pretrained_name ) nlp("Archie sedang mengendarai roket ke planet Mars.") ``` ### Feature Extraction in PyTorch ```python from transformers import GPT2LMHeadModel, GPT2TokenizerFast pretrained_name = "bookbot/gpt2-indo-small-kids-stories" model = GPT2LMHeadModel.from_pretrained(pretrained_name) tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name) prompt = "Archie sedang mengendarai roket ke planet Mars." encoded_input = tokenizer(prompt, return_tensors='pt') output = model(**encoded_input) ``` ## Disclaimer Do consider the biases which come from both the pre-trained GPT-2 model and the Indonesian Kids' Stories dataset that may be carried over into the results of this model. ## Author GPT-2 Indonesian Small Kids Stories was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
{"language": "id", "license": "mit", "tags": ["gpt2-indo-small-kids-stories"], "widget": [{"text": "Archie sedang mengendarai roket ke planet Mars."}]}
bookbot/gpt2-indo-small-kids-stories
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "gpt2-indo-small-kids-stories", "id", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #safetensors #gpt2 #text-generation #gpt2-indo-small-kids-stories #id #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT-2 Indonesian Small Kids Stories ----------------------------------- GPT-2 Indonesian Small Kids Stories is a causal language model based on the OpenAI GPT-2 model. The model was originally the pre-trained GPT2 Small Indonesian model, which was then fine-tuned on Indonesian kids' stories from Room To Read and Let's Read. 10% of the dataset was kept for evaluation purposes. The pre-trained model was fine-tuned and achieved an evaluation loss of 3.777 and an evaluation perplexity of 43.68. Hugging Face's 'Trainer' class from the Transformers library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless. Model ----- Evaluation Results ------------------ The model was fine-tuned for 10 epochs. Epoch: 1, Training Loss: 4.259600, Validation Loss: 4.020201 Epoch: 2, Training Loss: 3.979100, Validation Loss: 3.911295 Epoch: 3, Training Loss: 3.818300, Validation Loss: 3.849313 Epoch: 4, Training Loss: 3.691600, Validation Loss: 3.809931 Epoch: 5, Training Loss: 3.589300, Validation Loss: 3.789201 Epoch: 6, Training Loss: 3.506200, Validation Loss: 3.778665 Epoch: 7, Training Loss: 3.439200, Validation Loss: 3.774871 Epoch: 8, Training Loss: 3.387600, Validation Loss: 3.774859 Epoch: 9, Training Loss: 3.351300, Validation Loss: 3.776672 Epoch: 10, Training Loss: 3.330100, Validation Loss: 3.776935 How to Use (PyTorch) -------------------- ### As Causal Language Model ### Feature Extraction in PyTorch Disclaimer ---------- Do consider the biases which come from both the pre-trained GPT-2 model and the Indonesian Kids' Stories dataset that may be carried over into the results of this model. Author ------ GPT-2 Indonesian Small Kids Stories was trained and evaluated by Wilson Wongso. All computation and development are done on Google Colaboratory using their free GPU access.
[ "### As Causal Language Model", "### Feature Extraction in PyTorch\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which come from both the pre-trained GPT-2 model and the Indonesian Kids' Stories dataset that may be carried over into the results of this model.\n\n\nAuthor\n------\n\n\nGPT-2 Indonesian Small Kids Stories was trained and evaluated by Wilson Wongso. All computation and development are done on Google Colaboratory using their free GPU access." ]
[ "TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #gpt2-indo-small-kids-stories #id #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### As Causal Language Model", "### Feature Extraction in PyTorch\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which come from both the pre-trained GPT-2 model and the Indonesian Kids' Stories dataset that may be carried over into the results of this model.\n\n\nAuthor\n------\n\n\nGPT-2 Indonesian Small Kids Stories was trained and evaluated by Wilson Wongso. All computation and development are done on Google Colaboratory using their free GPU access." ]
audio-classification
transformers
# Wav2Vec2 Adult/Child Speech Classifier Wav2Vec2 Adult/Child Speech Classifier is an audio classification model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. This model is a fine-tuned version of [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. ## Model | Model | #params | Arch. | Training/Validation data (text) | | -------------------------- | ------- | ----------- | ----------------------------------------- | | `wav2vec2-adult-child-cls` | 91M | wav2vec 2.0 | Adult/Child Speech Classification Dataset | ## Evaluation Results The model achieves the following results on evaluation: | Dataset | Loss | Accuracy | F1 | | --------------------------------- | ------ | -------- | ------ | | Adult/Child Speech Classification | 0.1682 | 95.80% | 0.9618 | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - `learning_rate`: 3e-05 - `train_batch_size`: 32 - `eval_batch_size`: 32 - `seed`: 42 - `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08` - `lr_scheduler_type`: linear - `lr_scheduler_warmup_ratio`: 0.1 - `num_epochs`: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | | :-----------: | :---: | :--: | :-------------: | :------: | :----: | | 0.2709 | 1.0 | 384 | 0.2616 | 0.9104 | 0.9142 | | 0.2112 | 2.0 | 768 | 0.1826 | 0.9386 | 0.9421 | | 0.1755 | 3.0 | 1152 | 0.1898 | 0.9354 | 0.9428 | | 0.0915 | 4.0 | 1536 | 0.1682 | 0.9580 | 0.9618 | | 0.1042 | 5.0 | 1920 | 0.1717 | 0.9511 | 0.9554 | ## Disclaimer Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. ## Authors Wav2Vec2 Adult/Child Speech Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Kaggle. ## Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
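For readers who prefer not to use the `pipeline` helper, here is a hedged sketch of running this classifier with the feature extractor and model loaded directly. The waveform loading, the placeholder file name, and the 16 kHz resampling are assumptions for illustration, not steps stated in the card.

```python
import torch
import torchaudio
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "bookbot/wav2vec2-adult-child-cls"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

# "speech.wav" is a placeholder; downmix to mono and resample to 16 kHz.
waveform, sample_rate = torchaudio.load("speech.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000).mean(dim=0)

inputs = feature_extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```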
{"language": "en", "license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "wav2vec2-base", "model-index": [{"name": "wav2vec2-adult-child-cls", "results": []}]}
bookbot/wav2vec2-adult-child-cls
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "en", "arxiv:2006.11477", "base_model:wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2006.11477" ]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2006.11477 #base_model-wav2vec2-base #license-apache-2.0 #endpoints_compatible #has_space #region-us
Wav2Vec2 Adult/Child Speech Classifier ====================================== Wav2Vec2 Adult/Child Speech Classifier is an audio classification model based on the wav2vec 2.0 architecture. This model is a fine-tuned version of wav2vec2-base on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. Model ----- Evaluation Results ------------------ The model achieves the following results on evaluation: Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * 'learning\_rate': 3e-05 * 'train\_batch\_size': 32 * 'eval\_batch\_size': 32 * 'seed': 42 * 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08' * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_warmup\_ratio': 0.1 * 'num\_epochs': 5 ### Training results Disclaimer ---------- Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. Authors ------- Wav2Vec2 Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle. Framework versions ------------------ * Transformers 4.16.2 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 32\n* 'eval\\_batch\\_size': 32\n* 'seed': 42\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 5", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle.\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2006.11477 #base_model-wav2vec2-base #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 32\n* 'eval\\_batch\\_size': 32\n* 'seed': 42\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 5", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle.\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
audio-classification
transformers
# Wav2Vec2 XLS-R Adult/Child Speech Classifier Wav2Vec2 XLS-R Adult/Child Speech Classifier is an audio classification model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. ## Model | Model | #params | Arch. | Training/Validation data (text) | | -------------------------------- | ------- | ----- | ----------------------------------------- | | `wav2vec2-xls-r-adult-child-cls` | 300M | XLS-R | Adult/Child Speech Classification Dataset | ## Evaluation Results The model achieves the following results on evaluation: | Dataset | Loss | Accuracy | F1 | | --------------------------------- | ------ | -------- | ------ | | Adult/Child Speech Classification | 0.1851 | 94.69% | 0.9508 | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - `learning_rate`: 3e-05 - `train_batch_size`: 8 - `eval_batch_size`: 8 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 32 - `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08` - `lr_scheduler_type`: linear - `lr_scheduler_warmup_ratio`: 0.1 - `num_epochs`: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | | :-----------: | :---: | :--: | :-------------: | :------: | :----: | | 0.2906 | 1.0 | 383 | 0.1856 | 0.9372 | 0.9421 | | 0.1749 | 2.0 | 766 | 0.1925 | 0.9418 | 0.9465 | | 0.1681 | 3.0 | 1149 | 0.1893 | 0.9414 | 0.9459 | | 0.1295 | 4.0 | 1532 | 0.1851 | 0.9469 | 0.9508 | | 0.2031 | 5.0 | 1915 | 0.1944 | 0.9423 | 0.9460 | ## Disclaimer Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. ## Authors Wav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Kaggle. ## Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
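As a reproduction aid, here is a minimal sketch of how the hyperparameters listed in this card map onto Hugging Face `TrainingArguments`. The output directory, logging directory, and evaluation strategy are illustrative assumptions, not values stated in the card.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; paths are placeholders.
training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-adult-child-cls",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective train batch size of 32
    warmup_ratio=0.1,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    seed=42,
    evaluation_strategy="epoch",     # assumption: the card reports per-epoch validation metrics
    logging_dir="./logs",
)
```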
{"language": "en", "license": "apache-2.0", "tags": ["audio-classification", "generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "wav2vec2-xls-r-adult-child-cls", "results": []}]}
bookbot/wav2vec2-xls-r-adult-child-cls
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "en", "arxiv:2111.09296", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2111.09296" ]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2111.09296 #license-apache-2.0 #endpoints_compatible #region-us
Wav2Vec2 XLS-R Adult/Child Speech Classifier ============================================ Wav2Vec2 XLS-R Adult/Child Speech Classifier is an audio classification model based on the XLS-R architecture. This model is a fine-tuned version of wav2vec2-xls-r-300m on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. Model ----- Evaluation Results ------------------ The model achieves the following results on evaluation: Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * 'learning\_rate': 3e-05 * 'train\_batch\_size': 8 * 'eval\_batch\_size': 8 * 'seed': 42 * 'gradient\_accumulation\_steps': 4 * 'total\_train\_batch\_size': 32 * 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08' * 'lr\_scheduler\_type': linear * 'lr\_scheduler\_warmup\_ratio': 0.1 * 'num\_epochs': 5 ### Training results Disclaimer ---------- Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. Authors ------- Wav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle. Framework versions ------------------ * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 8\n* 'eval\\_batch\\_size': 8\n* 'seed': 42\n* 'gradient\\_accumulation\\_steps': 4\n* 'total\\_train\\_batch\\_size': 32\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 5", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle.\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #en #arxiv-2111.09296 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* 'learning\\_rate': 3e-05\n* 'train\\_batch\\_size': 8\n* 'eval\\_batch\\_size': 8\n* 'seed': 42\n* 'gradient\\_accumulation\\_steps': 4\n* 'total\\_train\\_batch\\_size': 32\n* 'optimizer': Adam with 'betas=(0.9,0.999)' and 'epsilon=1e-08'\n* 'lr\\_scheduler\\_type': linear\n* 'lr\\_scheduler\\_warmup\\_ratio': 0.1\n* 'num\\_epochs': 5", "### Training results\n\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by Wilson Wongso. All computation and development are done on Kaggle.\n\n\nFramework versions\n------------------\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
bookemdan/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "conversational", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #conversational #endpoints_compatible #has_space #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #conversational #endpoints_compatible #has_space #region-us \n", "# Harry Potter DialoGPT Model" ]
text-generation
transformers
# berk

{"tags": ["conversational"]}
boran/berkbot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# berk
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
null
Tokenizer based on `facebook/bart-large-cnn` and trained on captions normalized by [dalle-mini](https://github.com/borisdayma/dalle-mini).
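A brief, hedged example of loading this tokenizer is shown below; it assumes the repository follows the standard BART tokenizer file layout, and the sample caption is purely illustrative.

```python
from transformers import BartTokenizerFast

# Assumption: standard BART tokenizer files are present in the repository.
tokenizer = BartTokenizerFast.from_pretrained("boris/dalle-mini-tokenizer")
encoded = tokenizer("a cute avocado armchair", return_tensors="np")
print(encoded.input_ids)
```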
{}
boris/dalle-mini-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
Tokenizer based on 'facebook/bart-large-cnn' and trained on captions normalized by dalle-mini.
[]
[ "TAGS\n#region-us \n" ]
null
null
## VQGAN-f16-16384 ### Model Description This is a Pytorch Lightning checkpoint of VQGAN, which learns a codebook of context-rich visual parts by leveraging both the use of convolutional methods and transformers. It was introduced in [Taming Transformers for High-Resolution Image Synthesis](https://compvis.github.io/taming-transformers/) ([CVPR paper](https://openaccess.thecvf.com/content/CVPR2021/html/Esser_Taming_Transformers_for_High-Resolution_Image_Synthesis_CVPR_2021_paper.html)). The model allows the encoding of images as a fixed-length sequence of tokens taken from the codebook. This version of the model uses a reduction factor `f=16` and a vocabulary of `13,384` tokens. As an example of how the reduction factor works, images of size `256x256` are encoded to sequences of `256` tokens: `256/16 * 256/16`. Images of `512x512` would result in sequences of `1024` tokens. ### Datasets Used for Training * ImageNet. We didn't train this model from scratch. Instead, we started from [a checkpoint pre-trained on ImageNet](https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/). * [Conceptual Captions 3M](https://ai.google.com/research/ConceptualCaptions/) (CC3M). * [OpenAI subset of YFCC100M](https://github.com/openai/CLIP/blob/main/data/yfcc100m.md). We fine-tuned on CC3M and YFCC100M to improve the encoding quality of people and faces, which are not very well represented in ImageNet. We used a subset of 2,268,720 images from CC3M and YFCC100M for this purpose. ### Training Process Finetuning was performed in PyTorch using [taming-transformers](https://github.com/CompVis/taming-transformers). The full training process and model preparation includes these steps: * Pre-training on ImageNet. Previously performed. We used [this checkpoint](https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887). * Fine-tuning, [Part 1](https://wandb.ai/wandb/hf-flax-dalle-mini/runs/2021-07-09T15-33-11_dalle_vqgan?workspace=user-borisd13). * Fine-tuning, [Part 2](https://wandb.ai/wandb/hf-flax-dalle-mini/runs/2021-07-09T21-42-07_dalle_vqgan?workspace=user-borisd13) – continuation from Part 1. The final checkpoint has been logged as an artifact in the training run and is the model present in this card. * Conversion to JAX as [`flax-community/vqgan_f16_16384`](https://huggingface.co/flax-community/vqgan_f16_16384). ### How to Use The checkpoint can be loaded using Pytorch-Lightning. Note: `omegaconf==2.0.0` is required for loading the checkpoint. ### Related Models in the Hub * JAX version of VQGAN, trained on the same datasets described here: [`flax-community/vqgan_f16_16384`](https://huggingface.co/flax-community/vqgan_f16_16384). * [DALL·E mini](https://huggingface.co/flax-community/dalle-mini), a Flax/JAX simplified implementation of OpenAI's DALL·E. ### Other This model was successfully used as part of the implementation of [DALL·E mini](https://github.com/borisdayma/dalle-mini). Our [report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA) contains more details on how to leverage it in an image encoding / generation pipeline.
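The "How to Use" section above states that the checkpoint loads with PyTorch Lightning but gives no code; below is a hedged sketch of the usual taming-transformers loading pattern. The config and checkpoint file names are placeholders, the zero tensor is a stand-in for a real image batch, and `omegaconf==2.0.0` is pinned as the card notes.

```python
import torch
from omegaconf import OmegaConf           # the card pins omegaconf==2.0.0
from taming.models.vqgan import VQModel   # install from the taming-transformers repository

# Placeholder paths: download config.yaml and the .ckpt file from this repository first.
config = OmegaConf.load("config.yaml")
model = VQModel(**config.model.params)
state_dict = torch.load("model.ckpt", map_location="cpu")["state_dict"]
model.load_state_dict(state_dict, strict=False)
model.eval()

# Encode a (batch, 3, 256, 256) image tensor scaled to [-1, 1] into codebook indices.
with torch.no_grad():
    quant, emb_loss, info = model.encode(torch.zeros(1, 3, 256, 256))
indices = info[2]  # 256/16 * 256/16 = 256 codebook indices per 256x256 image
```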
{}
boris/vqgan_f16_16384
null
[ "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #has_space #region-us
## VQGAN-f16-16384 ### Model Description This is a Pytorch Lightning checkpoint of VQGAN, which learns a codebook of context-rich visual parts by leveraging both the use of convolutional methods and transformers. It was introduced in Taming Transformers for High-Resolution Image Synthesis (CVPR paper). The model allows the encoding of images as a fixed-length sequence of tokens taken from the codebook. This version of the model uses a reduction factor 'f=16' and a vocabulary of '13,384' tokens. As an example of how the reduction factor works, images of size '256x256' are encoded to sequences of '256' tokens: '256/16 * 256/16'. Images of '512x512' would result in sequences of '1024' tokens. ### Datasets Used for Training * ImageNet. We didn't train this model from scratch. Instead, we started from a checkpoint pre-trained on ImageNet. * Conceptual Captions 3M (CC3M). * OpenAI subset of YFCC100M. We fine-tuned on CC3M and YFCC100M to improve the encoding quality of people and faces, which are not very well represented in ImageNet. We used a subset of 2,268,720 images from CC3M and YFCC100M for this purpose. ### Training Process Finetuning was performed in PyTorch using taming-transformers. The full training process and model preparation includes these steps: * Pre-training on ImageNet. Previously performed. We used this checkpoint. * Fine-tuning, Part 1. * Fine-tuning, Part 2 – continuation from Part 1. The final checkpoint has been logged as an artifact in the training run and is the model present in this card. * Conversion to JAX as 'flax-community/vqgan_f16_16384'. ### How to Use The checkpoint can be loaded using Pytorch-Lightning. Note: 'omegaconf==2.0.0' is required for loading the checkpoint. ### Related Models in the Hub * JAX version of VQGAN, trained on the same datasets described here: 'flax-community/vqgan_f16_16384'. * DALL·E mini, a Flax/JAX simplified implementation of OpenAI's DALL·E. ### Other This model was successfully used as part of the implementation of DALL·E mini. Our report contains more details on how to leverage it in an image encoding / generation pipeline.
[ "## VQGAN-f16-16384", "### Model Description\n\nThis is a Pytorch Lightning checkpoint of VQGAN, which learns a codebook of context-rich visual parts by leveraging both the use of convolutional methods and transformers. It was introduced in Taming Transformers for High-Resolution Image Synthesis (CVPR paper).\n\nThe model allows the encoding of images as a fixed-length sequence of tokens taken from the codebook.\n\nThis version of the model uses a reduction factor 'f=16' and a vocabulary of '13,384' tokens.\n\nAs an example of how the reduction factor works, images of size '256x256' are encoded to sequences of '256' tokens: '256/16 * 256/16'. Images of '512x512' would result in sequences of '1024' tokens.", "### Datasets Used for Training\n\n* ImageNet. We didn't train this model from scratch. Instead, we started from a checkpoint pre-trained on ImageNet.\n* Conceptual Captions 3M (CC3M).\n* OpenAI subset of YFCC100M.\n\nWe fine-tuned on CC3M and YFCC100M to improve the encoding quality of people and faces, which are not very well represented in ImageNet. We used a subset of 2,268,720 images from CC3M and YFCC100M for this purpose.", "### Training Process\n\nFinetuning was performed in PyTorch using taming-transformers. The full training process and model preparation includes these steps:\n\n* Pre-training on ImageNet. Previously performed. We used this checkpoint.\n* Fine-tuning, Part 1.\n* Fine-tuning, Part 2 – continuation from Part 1. The final checkpoint has been logged as an artifact in the training run and is the model present in this card.\n* Conversion to JAX as 'flax-community/vqgan_f16_16384'.", "### How to Use\n\nThe checkpoint can be loaded using Pytorch-Lightning.\n\nNote: 'omegaconf==2.0.0' is required for loading the checkpoint.", "### Related Models in the Hub\n\n* JAX version of VQGAN, trained on the same datasets described here: 'flax-community/vqgan_f16_16384'.\n* DALL·E mini, a Flax/JAX simplified implementation of OpenAI's DALL·E.", "### Other\n\nThis model was successfully used as part of the implementation of DALL·E mini. Our report contains more details on how to leverage it in an image encoding / generation pipeline." ]
[ "TAGS\n#has_space #region-us \n", "## VQGAN-f16-16384", "### Model Description\n\nThis is a Pytorch Lightning checkpoint of VQGAN, which learns a codebook of context-rich visual parts by leveraging both the use of convolutional methods and transformers. It was introduced in Taming Transformers for High-Resolution Image Synthesis (CVPR paper).\n\nThe model allows the encoding of images as a fixed-length sequence of tokens taken from the codebook.\n\nThis version of the model uses a reduction factor 'f=16' and a vocabulary of '13,384' tokens.\n\nAs an example of how the reduction factor works, images of size '256x256' are encoded to sequences of '256' tokens: '256/16 * 256/16'. Images of '512x512' would result in sequences of '1024' tokens.", "### Datasets Used for Training\n\n* ImageNet. We didn't train this model from scratch. Instead, we started from a checkpoint pre-trained on ImageNet.\n* Conceptual Captions 3M (CC3M).\n* OpenAI subset of YFCC100M.\n\nWe fine-tuned on CC3M and YFCC100M to improve the encoding quality of people and faces, which are not very well represented in ImageNet. We used a subset of 2,268,720 images from CC3M and YFCC100M for this purpose.", "### Training Process\n\nFinetuning was performed in PyTorch using taming-transformers. The full training process and model preparation includes these steps:\n\n* Pre-training on ImageNet. Previously performed. We used this checkpoint.\n* Fine-tuning, Part 1.\n* Fine-tuning, Part 2 – continuation from Part 1. The final checkpoint has been logged as an artifact in the training run and is the model present in this card.\n* Conversion to JAX as 'flax-community/vqgan_f16_16384'.", "### How to Use\n\nThe checkpoint can be loaded using Pytorch-Lightning.\n\nNote: 'omegaconf==2.0.0' is required for loading the checkpoint.", "### Related Models in the Hub\n\n* JAX version of VQGAN, trained on the same datasets described here: 'flax-community/vqgan_f16_16384'.\n* DALL·E mini, a Flax/JAX simplified implementation of OpenAI's DALL·E.", "### Other\n\nThis model was successfully used as part of the implementation of DALL·E mini. Our report contains more details on how to leverage it in an image encoding / generation pipeline." ]
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-English Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on {language} using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "{lang_id}", split="test[:2%]") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site. processor = Wav2Vec2Processor.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic` model = Wav2Vec2ForCTC.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic` resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): \tspeech_array, sampling_rate = torchaudio.load(batch["path"]) \tbatch["speech"] = resampler(speech_array).squeeze().numpy() \treturn batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): \tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` ## Evaluation The model can be evaluated as follows on the {language} test data of Common Voice. # TODO: replace #TODO: replace language with your {language}, *e.g.* French ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "{lang_id}", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site. wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic` model = Wav2Vec2ForCTC.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic` model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]' # TODO: adapt this list to include all special characters you removed from the data resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): \tbatch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() \tspeech_array, sampling_rate = torchaudio.load(batch["path"]) \tbatch["speech"] = resampler(speech_array).squeeze().numpy() \treturn batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): \tinputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) \twith torch.no_grad(): \t\tlogits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits \tpred_ids = torch.argmax(logits, dim=-1) \tbatch["pred_strings"] = processor.batch_decode(pred_ids) \treturn batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: XX.XX % # TODO: write output of print here. IMPORTANT: Please remember to also replace {wer_result_on_test} at the top of with this value here. tags. ## Training The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training. The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
{"language": "en", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "English XLSR Wav2Vec2 Large 53 with punctuation", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice en", "type": "common_voice", "args": "en"}, "metrics": [{"type": "wer", "value": 1.0, "name": "Test WER"}]}]}]}
boris/xlsr-en-punctuation
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "en", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #en #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-English Fine-tuned facebook/wav2vec2-large-xlsr-53 on {language} using the Common Voice. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the {language} test data of Common Voice. # TODO: replace #TODO: replace language with your {language}, *e.g.* French Test Result: XX.XX % # TODO: write output of print here. IMPORTANT: Please remember to also replace {wer_result_on_test} at the top of with this value here. tags. ## Training The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training. The script used for training can be found here # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
[ "# Wav2Vec2-Large-XLSR-53-English\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on {language} using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the {language} test data of Common Voice. # TODO: replace #TODO: replace language with your {language}, *e.g.* French\n\n\n\n\nTest Result: XX.XX % # TODO: write output of print here. IMPORTANT: Please remember to also replace {wer_result_on_test} at the top of with this value here. tags.", "## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.\n\nThe script used for training can be found here # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #en #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-English\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on {language} using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the {language} test data of Common Voice. # TODO: replace #TODO: replace language with your {language}, *e.g.* French\n\n\n\n\nTest Result: XX.XX % # TODO: write output of print here. IMPORTANT: Please remember to also replace {wer_result_on_test} at the top of with this value here. tags.", "## Training\n\nThe Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.\n\nThe script used for training can be found here # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here." ]
text-classification
transformers
For study purposes only.
{}
bowipawan/bert-sentimental
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
For studying only
[]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Gollum DialoGPT Model
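The card above gives no usage snippet, so here is a minimal chat-loop sketch, assuming this checkpoint follows the standard DialoGPT setup; the repository id is taken from this entry's listing, and the turn count and generation settings are illustrative only.

```python
# Minimal chat-loop sketch, assuming the standard DialoGPT usage pattern.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("boydster/DialoGPT-small-gollum")
model = AutoModelForCausalLM.from_pretrained("boydster/DialoGPT-small-gollum")

chat_history_ids = None
for _ in range(3):  # three illustrative turns
    user_input = input(">> User: ")
    # Encode the user turn and append the end-of-sequence token
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    # Keep the running dialogue history as generation context
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Gollum:", reply)
```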
{"tags": ["conversational"]}
boydster/DialoGPT-small-gollum
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Gollum DialoGPT Model
[ "# Gollum DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Gollum DialoGPT Model" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 33199029 - CO2 Emissions (in grams): 3.667033499762825 ## Validation Metrics - Loss: 0.32653310894966125 - Accuracy: 0.9133333333333333 - Precision: 0.9005847953216374 - Recall: 0.9447852760736196 - AUC: 0.9532488468944517 - F1: 0.9221556886227544 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bozelosp/autonlp-sci-relevance-33199029 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bozelosp/autonlp-sci-relevance-33199029", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bozelosp/autonlp-sci-relevance-33199029", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["bozelosp/autonlp-data-sci-relevance"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 3.667033499762825}
world-wide/sent-sci-irrelevance
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:bozelosp/autonlp-data-sci-relevance", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #autonlp #en #dataset-bozelosp/autonlp-data-sci-relevance #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 33199029 - CO2 Emissions (in grams): 3.667033499762825 ## Validation Metrics - Loss: 0.32653310894966125 - Accuracy: 0.9133333333333333 - Precision: 0.9005847953216374 - Recall: 0.9447852760736196 - AUC: 0.9532488468944517 - F1: 0.9221556886227544 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 33199029\n- CO2 Emissions (in grams): 3.667033499762825", "## Validation Metrics\n\n- Loss: 0.32653310894966125\n- Accuracy: 0.9133333333333333\n- Precision: 0.9005847953216374\n- Recall: 0.9447852760736196\n- AUC: 0.9532488468944517\n- F1: 0.9221556886227544", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-bozelosp/autonlp-data-sci-relevance #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 33199029\n- CO2 Emissions (in grams): 3.667033499762825", "## Validation Metrics\n\n- Loss: 0.32653310894966125\n- Accuracy: 0.9133333333333333\n- Precision: 0.9005847953216374\n- Recall: 0.9447852760736196\n- AUC: 0.9532488468944517\n- F1: 0.9221556886227544", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6434 - Precision: 0.8589 - Recall: 0.8686 - F1: 0.8637 - Accuracy: 0.8324 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.615 | 1.0 | 1741 | 0.6111 | 0.8200 | 0.8652 | 0.8420 | 0.8046 | | 0.4795 | 2.0 | 3482 | 0.5366 | 0.8456 | 0.8803 | 0.8626 | 0.8301 | | 0.3705 | 3.0 | 5223 | 0.5412 | 0.8527 | 0.8786 | 0.8655 | 0.8339 | | 0.2749 | 4.0 | 6964 | 0.5906 | 0.8559 | 0.8711 | 0.8634 | 0.8316 | | 0.2049 | 5.0 | 8705 | 0.6434 | 0.8589 | 0.8686 | 0.8637 | 0.8324 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
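As a rough illustration of the hyperparameters listed above, the setup could be expressed as a `TrainingArguments` configuration along the following lines; the output directory is a placeholder, and the dataset, preprocessing, and `Trainer` wiring are not documented in the card, so this is a sketch rather than the exact script used.

```python
# Sketch of the listed hyperparameters as a transformers TrainingArguments config.
# output_dir is a placeholder; dataset loading and the Trainer itself are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",   # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,     # yields the total train batch size of 8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
    seed=42,
)
print(training_args)
```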
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner", "results": []}]}
brad1141/bert-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "longformer", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #longformer #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
bert-finetuned-ner ================== This model is a fine-tuned version of allenai/longformer-base-4096 on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6434 * Precision: 0.8589 * Recall: 0.8686 * F1: 0.8637 * Accuracy: 0.8324 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.17.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.4 * Tokenizers 0.11.6
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.4\n* Tokenizers 0.11.6" ]
[ "TAGS\n#transformers #pytorch #tensorboard #longformer #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.4\n* Tokenizers 0.11.6" ]
null
null
This is a test model
{}
bradyll/bert_finetuning_test_20220210
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
This is a test model
[]
[ "TAGS\n#region-us \n" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-finetuned-ner This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0501 - Precision: 0.9563 - Recall: 0.9652 - F1: 0.9608 - Accuracy: 0.9899 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1419 | 1.0 | 878 | 0.0628 | 0.9290 | 0.9288 | 0.9289 | 0.9835 | | 0.0379 | 2.0 | 1756 | 0.0466 | 0.9456 | 0.9567 | 0.9511 | 0.9878 | | 0.0176 | 3.0 | 2634 | 0.0473 | 0.9539 | 0.9575 | 0.9557 | 0.9890 | | 0.0098 | 4.0 | 3512 | 0.0468 | 0.9570 | 0.9635 | 0.9603 | 0.9896 | | 0.0043 | 5.0 | 4390 | 0.0501 | 0.9563 | 0.9652 | 0.9608 | 0.9899 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
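For inference, the fine-tuned checkpoint can presumably be loaded through the token-classification pipeline; the sketch below uses the repository id from this entry's listing, and the example sentence and `aggregation_strategy` choice are illustrative assumptions rather than part of the original card.

```python
# Sketch of inference with the fine-tuned checkpoint via the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "ner",
    model="geckos/deberta-base-fine-tuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Hugging Face is based in New York City and Paris."))
```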
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "deberta-base-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9563020492186769, "name": "Precision"}, {"type": "recall", "value": 0.9652436720816018, "name": "Recall"}, {"type": "f1", "value": 0.9607520564042303, "name": "F1"}, {"type": "accuracy", "value": 0.9899205302077261, "name": "Accuracy"}]}]}]}
geckos/deberta-base-fine-tuned-ner
null
[ "transformers", "pytorch", "tensorboard", "deberta", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #deberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
deberta-base-finetuned-ner ========================== This model is a fine-tuned version of microsoft/deberta-base on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0501 * Precision: 0.9563 * Recall: 0.9652 * F1: 0.9608 * Accuracy: 0.9899 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #deberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0606 - Precision: 0.9303 - Recall: 0.9380 - F1: 0.9342 - Accuracy: 0.9842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2459 | 1.0 | 878 | 0.0696 | 0.9117 | 0.9195 | 0.9156 | 0.9808 | | 0.0513 | 2.0 | 1756 | 0.0602 | 0.9223 | 0.9376 | 0.9299 | 0.9835 | | 0.0304 | 3.0 | 2634 | 0.0606 | 0.9303 | 0.9380 | 0.9342 | 0.9842 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
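As a hedged sketch of inference without the pipeline helper, the checkpoint can be loaded directly and the per-token predictions mapped back through `id2label`; the repository id comes from this entry's listing and the example sentence is arbitrary.

```python
# Sketch of token classification without the pipeline helper.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "geckos/distilbert-base-uncased-fine-tuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Angela Merkel visited Paris in 2019.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each sub-token to its predicted label
predicted_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id.item()])
```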
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9303228669699323, "name": "Precision"}, {"type": "recall", "value": 0.9380243875153821, "name": "Recall"}, {"type": "f1", "value": 0.9341577540106952, "name": "F1"}, {"type": "accuracy", "value": 0.9842407104389407, "name": "Accuracy"}]}]}]}
geckos/distilbert-base-uncased-fine-tuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0606 * Precision: 0.9303 * Recall: 0.9380 * F1: 0.9342 * Accuracy: 0.9842 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
null
null
# [models/cnstd](models/cnstd) Stores the models used by [cnstd](https://github.com/breezedeus/cnstd). # [models/cnocr](models/cnocr) Stores the models used by [cnocr](https://github.com/breezedeus/cnocr).
{}
breezedeus/cnstd-cnocr-models
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
# models/cnstd Stores the models used by cnstd. # models/cnocr Stores the models used by cnocr.
[ "# models/cnstd\n存放 cnstd 中使用的模型。", "# models/cnocr\n存放 cnocr 中使用的模型。" ]
[ "TAGS\n#region-us \n", "# models/cnstd\n存放 cnstd 中使用的模型。", "# models/cnocr\n存放 cnocr 中使用的模型。" ]
text-generation
transformers
# RickBot built for [Chai](https://chai.ml/) Make your own [here](https://colab.research.google.com/drive/1o5LxBspm-C28HQvXN-PRQavapDbm5WjG?usp=sharing)
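A minimal sketch of talking to the bot locally, assuming the conversational pipeline available in contemporary `transformers` releases; the repository id is taken from this entry's listing and the opening line is just an example.

```python
# Sketch using the conversational pipeline from transformers 4.x.
from transformers import Conversation, pipeline

chatbot = pipeline("conversational", model="brimeggi/testbot2")

conversation = Conversation("Morty, we gotta go on an adventure!")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
```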
{"tags": ["conversational"]}
brimeggi/testbot2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# RickBot built for Chai Make your own here
[ "# RickBot built for Chai\nMake your own here" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# RickBot built for Chai\nMake your own here" ]
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
brokentx/newbrokiev2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# My Awesome Model
[ "# My Awesome Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# My Awesome Model" ]
token-classification
transformers
# docusco-bert ## Model description **docusco-bert** is a fine-tuned BERT model that is ready to use for **token classification**. The model was trained on data sampled from the Corpus of Contemporary American English ([COCA](https://www.english-corpora.org/coca/)) and classifies tokens and token sequences according to a system developed for the [**DocuScope**](https://www.cmu.edu/dietrich/english/research-and-publications/docuscope.html) dictionary-based tagger. Descriptions of the categories are included in a table below. ## About DocuScope DocuScope is a dictionary-based tagger that has been developed at Carnegie Mellon University by **David Kaufer** and **Suguru Ishizaki** since the early 2000s. Its categories are rhetorical in their orientation (as opposed to part-of-speech tags, for example, which are morphosyntactic). DocuScope has been used in [a wide variety of studies](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=docuscope&btnG=). Here, for example, is [a short analysis of King Lear](https://graphics.cs.wisc.edu/WP/vep/2017/02/14/guest-post-data-mining-king-lear/), and here is [a published study of Tweets](https://journals.sagepub.com/doi/full/10.1177/2055207619844865). ## Intended uses & limitations #### How to use The model was trained on data with tags formatted using [IOB](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)), like those used in common tasks like Named Entity Recognition (NER). Thus, you can use this model with a Transformers NER *pipeline*. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("browndw/docusco-bert") model = AutoModelForTokenClassification.from_pretrained("browndw/docusco-bert") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Globalization is the process of interaction and integration among people, companies, and governments worldwide." ds_results = nlp(example) print(ds_results) ``` #### Limitations and bias This model is limited by its training dataset of American English texts. Moreover, the current version is trained on only a small subset of the corpus. The goal is to train later versions on more data, which should increase accuracy. ## Training data This model was fine-tuned on data from the Corpus of Contemporary American English ([COCA](https://www.english-corpora.org/coca/)). The training data contain chunks of text randomly sampled from 5 text-types: Academic, Fiction, Magazine, News, and Spoken. Typically, BERT models are trained on sentence segments. However, DocuScope tags can span sentences. Thus, data were split into chunks that don't split **B + I** sequences and end with sentence-final punctuation marks (i.e., period, question mark or exclamation point). Additionally, the order of the chunks was randomized prior to sampling, and stratified sampling was used to provide enough training data for low-frequency categories. The resulting training data consist of: * 21,460,177 tokens * 15,796,305 chunks The specific counts for each category appear in the following table.
Category|Count -|- O|3528038 Syntactic Complexity|2032808 Character|1413771 Description|1224744 Narrative|1159201 Negative|651012 Academic Terms|620932 Interactive|594908 Information Exposition|578228 Positive|463914 Force Stressed|432631 Information Topics|394155 First Person|249744 Metadiscourse Cohesive|240822 Strategic|238255 Public Terms|234213 Reasoning|213775 Information Place|187249 Information States|173146 Information ReportVerbs|119092 Confidence High|112861 Confidence Hedged|110008 Future|96101 Inquiry|94995 Contingent|94860 Information Change|89063 Metadiscourse Interactive|84033 Updates|81424 Citation|71241 Facilitate|50451 Uncertainty|35644 Academic WritingMoves|29352 Information ChangePositive|28475 Responsibility|25362 Citation Authority|22414 Information ChangeNegative|15612 Confidence Low|2876 Citation Hedged|895 -|- Total|15796305 ## Training procedure This model was trained on a single 2.3 GHz Dual-Core Intel Core i5 with recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805). ## Eval results ### Overall metric|test -|- f1 |.927 accuracy |.943 ### By category category|precision|recall|f1-score|support -|-|-|-|- AcademicTerms|0.91|0.92|0.92|486399 AcademicWritingMoves|0.76|0.82|0.79|20017 Character|0.94|0.95|0.94|1260272 Citation|0.92|0.94|0.93|50812 CitationAuthority|0.86|0.88|0.87|17798 CitationHedged|0.91|0.94|0.92|632 ConfidenceHedged|0.94|0.96|0.95|90393 ConfidenceHigh|0.92|0.94|0.93|113569 ConfidenceLow|0.79|0.81|0.80|2556 Contingent|0.92|0.94|0.93|81366 Description|0.87|0.89|0.88|1098598 Facilitate|0.87|0.90|0.89|41760 FirstPerson|0.96|0.98|0.97|330658 ForceStressed|0.93|0.94|0.93|436188 Future|0.90|0.93|0.92|93365 InformationChange|0.88|0.91|0.89|72813 InformationChangeNegative|0.83|0.85|0.84|12740 InformationChangePositive|0.82|0.86|0.84|22994 InformationExposition|0.94|0.95|0.95|468078 InformationPlace|0.95|0.96|0.96|147688 InformationReportVerbs|0.91|0.93|0.92|95563 InformationStates|0.95|0.95|0.95|139429 InformationTopics|0.90|0.92|0.91|328152 Inquiry|0.85|0.89|0.87|79030 Interactive|0.95|0.96|0.95|602857 MetadiscourseCohesive|0.97|0.98|0.98|195548 MetadiscourseInteractive|0.92|0.94|0.93|73159 Narrative|0.92|0.94|0.93|1023452 Negative|0.88|0.89|0.88|645810 Positive|0.87|0.89|0.88|409775 PublicTerms|0.91|0.92|0.91|184108 Reasoning|0.93|0.95|0.94|169208 Responsibility|0.83|0.87|0.85|21819 Strategic|0.88|0.90|0.89|193768 SyntacticComplexity|0.95|0.96|0.96|1635918 Uncertainty|0.87|0.91|0.89|33684 Updates|0.91|0.93|0.92|77760 -|-|-|-|- micro avg|0.92|0.93|0.93|10757736 macro avg|0.90|0.92|0.91|10757736 weighted avg|0.92|0.93|0.93|10757736 ## DocuScope Category Descriptions Category (Cluster)|Description|Examples -|-|- Academic Terms|Abstract, rare, specialized, or disciplinary-specific terms that are indicative of informationally dense writing|*market price*, *storage capacity*, *regulatory*, *distribution* Academic Writing Moves|Phrases and terms that indicate academic writing moves, which are common in research genres and are derived from the work of Swales (1981) and Cotos et al. 
(2015, 2017)|*in the first section*, *the problem is that*, *payment methodology*, *point of contention* Character|References multiple dimensions of a character or human being as a social agent, both individual and collective|*Pauline*, *her*, *personnel*, *representatives* Citation|Language that indicates the attribution of information to, or citation of, another source.|*according to*, *is proposing that*, *quotes from* Citation Authorized|Referencing the citation of another source that is represented as true and not arguable|*confirm that*, *provide evidence*, *common sense* Citation Hedged|Referencing the citation of another source that is presented as arguable|*suggest that*, *just one opinion* Confidence Hedged|Referencing language that presents a claim as uncertain|*tends to get*, *maybe*, *it seems that* Confidence High|Referencing language that presents a claim with certainty|*most likely*, *ensure that*, *know that*, *obviously* Confidence Low|Referencing language that presents a claim as extremely unlikely|*unlikely*, *out of the question*, *impossible* Contingent|Referencing contingency, typically contingency in the world, rather than contingency in one's knowledge|*subject to*, *if possible*, *just in case*, *hypothetically* Description|Language that evokes sights, sounds, smells, touches and tastes, as well as scenes and objects|*stay quiet*, *gas-fired*, *solar panels*, *soft*, *on my desk* Facilitate|Language that enables or directs one through specific tasks and actions|*let me*, *worth a try*, *I would suggest* First Person|This cluster captures first person.|*I*, *as soon as I*, *we have been* Force Stressed|Language that is forceful and stressed, often using emphatics, comparative forms, or superlative forms|*really good*, *the sooner the better*, *necessary* Future|Referencing future actions, states, or desires|*will be*, *hope to*, *expected changes* Information Change|Referencing changes of information, particularly changes that are more neutral|*changes*, *revised*, *growth*, *modification to* Information Change Negative|Referencing negative change|*going downhill*, *slow erosion*, *get worse* Information Change Positive|Referencing positive change|*improving*, *accrued interest*, *boost morale* Information Exposition|Information in the form of expository devices, or language that describes or explains, frequently in regards to quantities and comparisons|*final amount*, *several*, *three*, *compare*, *80%* Information Place|Language designating places|*the city*, *surrounding areas*, *Houston*, *home* Information Report Verbs|Informational verbs and verb phrases of reporting|*report*, *posted*, *release*, *point out* Information States|Referencing information states, or states of being|*is*, *are*, *existing*, *been* Information Topics|Referencing topics, usually nominal subjects or objects, that indicate the “aboutness” of a text|*time*, *money*, *stock price*, *phone interview* Inquiry|Referencing inquiry, or language that points to some kind of inquiry or investigation|*find out*, *let me know if you have any questions*, *wondering if* Interactive|Addresses from the author to the reader or from persons in the text to other persons. 
The address comes in the language of everyday conversation, colloquy, exchange, questions, attention-getters, feedback, interactive genre markers, and the use of the second person.|*can you*, *thank you for*, *please see*, *sounds good to me* Metadiscourse Cohesive|The use of words to build cohesive markers that help the reader navigate the text and signal linkages in the text, which are often additive or contrastive|*or*, *but*, *also*, *on the other hand*, *notwithstanding*, *that being said* Metadiscourse Interactive|The use of words to build cohesive markers that interact with the reader|*I agree*, *let’s talk*, *by the way* Narrative|Language that involves people, description, and events extending in time|*today*, *tomorrow*, *during the*, *this weekend* Negative|Referencing dimensions of negativity, including negative acts, emotions, relations, and values|*does not*, *sorry for*, *problems*, *confusion* Positive|Referencing dimensions of positivity, including actions, emotions, relations, and values|*thanks*, *approval*, *agreement*, *looks good* Public Terms|Referencing public terms, concepts from public language, media, the language of authority, institutions, and responsibility|*discussion*, *amendment*, *corporation*, *authority*, *settlement* Reasoning|Language that has a reasoning focus, supporting inferences about cause, consequence, generalization, concession, and linear inference either from premise to conclusion or conclusion to premise|*because*, *therefore*, *analysis*, *even if*, *as a result*, *indicating that* Responsibility|Referencing the language of responsibility|*supposed to*, *requirements*, *obligations* Strategic|This dimension is active when the text structures strategies activism, advantage-seeking, game-playing cognition, plans, and goal-seeking.|*plan*, *trying to*, *strategy*, *decision*, *coordinate*, *look at the* Syntactic Complexity|The features in this category are often what are called “function words,” like determiners and prepositions.|*the*, *to*, *for*, *in*, *a lot of* Uncertainty|References uncertainty, when confidence levels are unknown|*kind of*, *I have no idea*, *for some reason* Updates|References updates that anticipate someone searching for information and receiving it|*already*, *a new*, *now that*, *here are some* ### BibTeX entry and citation info ``` @incollection{ishizaki2012computer, title = {Computer-aided rhetorical analysis}, author = {Ishizaki, Suguru and Kaufer, David}, booktitle= {Applied natural language processing: Identification, investigation and resolution}, pages = {276--296}, year = {2012}, publisher= {IGI Global}, url = {https://www.igi-global.com/chapter/content/61054} } ``` ``` @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"language": "en", "datasets": "COCA"}
browndw/docusco-bert
null
[ "transformers", "pytorch", "tf", "jax", "bert", "token-classification", "en", "dataset:COCA", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #bert #token-classification #en #dataset-COCA #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us
docusco-bert ============ Model description ----------------- docusco-bert is a fine-tuned BERT model that is ready to use for token classification. The model was trained on data sampled from the Corpus of Contemporary American English (COCA) and classifies tokens and token sequences according to a system developed for the DocuScope dictionary-based tagger. Descriptions of the categories are included in a table below. About DocuScope --------------- DocuScope is a dictionary-based tagger that has been developed at Carnegie Mellon University by David Kaufer and Suguru Ishizaki since the early 2000s. Its categories are rhetorical in their orientation (as opposed to part-of-speech tags, for example, which are morphosyntactic). DocuScope has been used in a wide variety of studies. Here, for example, is a short analysis of King Lear, and here is a published study of Tweets. Intended uses & limitations --------------------------- #### How to use The model was trained on data with tags formatted using IOB, like those used in common tasks like Named Entity Recognition (NER). Thus, you can use this model with a Transformers NER *pipeline*. #### Limitations and bias This model is limited by its training dataset of American English texts. Moreover, the current version is trained on only a small subset of the corpus. The goal is to train later versions on more data, which should increase accuracy. Training data ------------- This model was fine-tuned on data from the Corpus of Contemporary American English (COCA). The training data contain chunks of text randomly sampled from 5 text-types: Academic, Fiction, Magazine, News, and Spoken. Typically, BERT models are trained on sentence segments. However, DocuScope tags can span sentences. Thus, data were split into chunks that don't split B + I sequences and end with sentence-final punctuation marks (i.e., period, question mark or exclamation point). Additionally, the order of the chunks was randomized prior to sampling, and stratified sampling was used to provide enough training data for low-frequency categories. The resulting training data consist of: * 21,460,177 tokens * 15,796,305 chunks The specific counts for each category appear in the following table. Training procedure ------------------ This model was trained on a single 2.3 GHz Dual-Core Intel Core i5 with recommended hyperparameters from the original BERT paper. Eval results ------------
(2015, 2017), Examples: *in the first section*, *the problem is that*, *payment methodology*, *point of contention* Category (Cluster): Character, Description: References multiple dimensions of a character or human being as a social agent, both individual and collective, Examples: *Pauline*, *her*, *personnel*, *representatives* Category (Cluster): Citation, Description: Language that indicates the attribution of information to, or citation of, another source., Examples: *according to*, *is proposing that*, *quotes from* Category (Cluster): Citation Authorized, Description: Referencing the citation of another source that is represented as true and not arguable, Examples: *confirm that*, *provide evidence*, *common sense* Category (Cluster): Citation Hedged, Description: Referencing the citation of another source that is presented as arguable, Examples: *suggest that*, *just one opinion* Category (Cluster): Confidence Hedged, Description: Referencing language that presents a claim as uncertain, Examples: *tends to get*, *maybe*, *it seems that* Category (Cluster): Confidence High, Description: Referencing language that presents a claim with certainty, Examples: *most likely*, *ensure that*, *know that*, *obviously* Category (Cluster): Confidence Low, Description: Referencing language that presents a claim as extremely unlikely, Examples: *unlikely*, *out of the question*, *impossible* Category (Cluster): Contingent, Description: Referencing contingency, typically contingency in the world, rather than contingency in one's knowledge, Examples: *subject to*, *if possible*, *just in case*, *hypothetically* Category (Cluster): Description, Description: Language that evokes sights, sounds, smells, touches and tastes, as well as scenes and objects, Examples: *stay quiet*, *gas-fired*, *solar panels*, *soft*, *on my desk* Category (Cluster): Facilitate, Description: Language that enables or directs one through specific tasks and actions, Examples: *let me*, *worth a try*, *I would suggest* Category (Cluster): First Person, Description: This cluster captures first person., Examples: *I*, *as soon as I*, *we have been* Category (Cluster): Force Stressed, Description: Language that is forceful and stressed, often using emphatics, comparative forms, or superlative forms, Examples: *really good*, *the sooner the better*, *necessary* Category (Cluster): Future, Description: Referencing future actions, states, or desires, Examples: *will be*, *hope to*, *expected changes* Category (Cluster): Information Change, Description: Referencing changes of information, particularly changes that are more neutral, Examples: *changes*, *revised*, *growth*, *modification to* Category (Cluster): Information Change Negative, Description: Referencing negative change, Examples: *going downhill*, *slow erosion*, *get worse* Category (Cluster): Information Change Positive, Description: Referencing positive change, Examples: *improving*, *accrued interest*, *boost morale* Category (Cluster): Information Exposition, Description: Information in the form of expository devices, or language that describes or explains, frequently in regards to quantities and comparisons, Examples: *final amount*, *several*, *three*, *compare*, *80%* Category (Cluster): Information Place, Description: Language designating places, Examples: *the city*, *surrounding areas*, *Houston*, *home* Category (Cluster): Information Report Verbs, Description: Informational verbs and verb phrases of reporting, Examples: *report*, *posted*, *release*, *point out* 
Category (Cluster): Information States, Description: Referencing information states, or states of being, Examples: *is*, *are*, *existing*, *been* Category (Cluster): Information Topics, Description: Referencing topics, usually nominal subjects or objects, that indicate the “aboutness” of a text, Examples: *time*, *money*, *stock price*, *phone interview* Category (Cluster): Inquiry, Description: Referencing inquiry, or language that points to some kind of inquiry or investigation, Examples: *find out*, *let me know if you have any questions*, *wondering if* Category (Cluster): Interactive, Description: Addresses from the author to the reader or from persons in the text to other persons. The address comes in the language of everyday conversation, colloquy, exchange, questions, attention-getters, feedback, interactive genre markers, and the use of the second person., Examples: *can you*, *thank you for*, *please see*, *sounds good to me* Category (Cluster): Metadiscourse Cohesive, Description: The use of words to build cohesive markers that help the reader navigate the text and signal linkages in the text, which are often additive or contrastive, Examples: *or*, *but*, *also*, *on the other hand*, *notwithstanding*, *that being said* Category (Cluster): Metadiscourse Interactive, Description: The use of words to build cohesive markers that interact with the reader, Examples: *I agree*, *let’s talk*, *by the way* Category (Cluster): Narrative, Description: Language that involves people, description, and events extending in time, Examples: *today*, *tomorrow*, *during the*, *this weekend* Category (Cluster): Negative, Description: Referencing dimensions of negativity, including negative acts, emotions, relations, and values, Examples: *does not*, *sorry for*, *problems*, *confusion* Category (Cluster): Positive, Description: Referencing dimensions of positivity, including actions, emotions, relations, and values, Examples: *thanks*, *approval*, *agreement*, *looks good* Category (Cluster): Public Terms, Description: Referencing public terms, concepts from public language, media, the language of authority, institutions, and responsibility, Examples: *discussion*, *amendment*, *corporation*, *authority*, *settlement* Category (Cluster): Reasoning, Description: Language that has a reasoning focus, supporting inferences about cause, consequence, generalization, concession, and linear inference either from premise to conclusion or conclusion to premise, Examples: *because*, *therefore*, *analysis*, *even if*, *as a result*, *indicating that* Category (Cluster): Responsibility, Description: Referencing the language of responsibility, Examples: *supposed to*, *requirements*, *obligations* Category (Cluster): Strategic, Description: This dimension is active when the text structures strategies activism, advantage-seeking, game-playing cognition, plans, and goal-seeking., Examples: *plan*, *trying to*, *strategy*, *decision*, *coordinate*, *look at the* Category (Cluster): Syntactic Complexity, Description: The features in this category are often what are called “function words,” like determiners and prepositions., Examples: *the*, *to*, *for*, *in*, *a lot of* Category (Cluster): Uncertainty, Description: References uncertainty, when confidence levels are unknown, Examples: *kind of*, *I have no idea*, *for some reason* Category (Cluster): Updates, Description: References updates that anticipate someone searching for information and receiving it, Examples: *already*, *a new*, *now that*, *here are 
some* ### BibTeX entry and citation info
[ "#### How to use\n\n\nThe model was trained on data with tags formatted using IOB), like those used in common tasks like Named Entity Recogition (NER). Thus, you can use this model with a Transformers NER *pipeline*.", "#### Limitations and bias\n\n\nThis model is limited by its training dataset of American English texts. Moreover, the current version is trained on only a small subset of the corpus. The goal is to train later versions on more data, which should increase accuracy.\n\n\nTraining data\n-------------\n\n\nThis model was fine-tuned on data from the Corpus of Contemporary American English (COCA). The training data contain chunks of text randomly sampled of 5 text-types: Academic, Fiction, Magazine, News, and Spoken.\n\n\nTypically, BERT models are trained on sentence segments. However, DocuScope tags can span setences. Thus, data were split into chunks that don't split B + I sequences and end with sentence-final punctuation marks (i.e., period, quesiton mark or exclamaiton point).\n\n\nAdditionally, the order of the chunks was randomized prior to sampling, and statified sampling was used to provide enough training data for low-frequency caegories. The resulting training data consist of:\n\n\n* 21,460,177 tokens\n* 15,796,305 chunks\n\n\nThe specific counts for each category appear in the following table.\n\n\n\nTraining procedure\n------------------\n\n\nThis model was trained on a single 2.3 GHz Dual-Core Intel Core i5 with recommended hyperparameters from the original BERT paper.\n\n\nEval results\n------------", "### Overall", "### By category\n\n\n\nDocuScope Category Descriptions\n-------------------------------\n\n\nCategory (Cluster): Academic Terms, Description: Abstract, rare, specialized, or disciplinary-specific terms that are indicative of informationally dense writing, Examples: *market price*, *storage capacity*, *regulatory*, *distribution*\nCategory (Cluster): Academic Writing Moves, Description: Phrases and terms that indicate academic writing moves, which are common in research genres and are derived from the work of Swales (1981) and Cotos et al. 
(2015, 2017), Examples: *in the first section*, *the problem is that*, *payment methodology*, *point of contention*\nCategory (Cluster): Character, Description: References multiple dimensions of a character or human being as a social agent, both individual and collective, Examples: *Pauline*, *her*, *personnel*, *representatives*\nCategory (Cluster): Citation, Description: Language that indicates the attribution of information to, or citation of, another source., Examples: *according to*, *is proposing that*, *quotes from*\nCategory (Cluster): Citation Authorized, Description: Referencing the citation of another source that is represented as true and not arguable, Examples: *confirm that*, *provide evidence*, *common sense*\nCategory (Cluster): Citation Hedged, Description: Referencing the citation of another source that is presented as arguable, Examples: *suggest that*, *just one opinion*\nCategory (Cluster): Confidence Hedged, Description: Referencing language that presents a claim as uncertain, Examples: *tends to get*, *maybe*, *it seems that*\nCategory (Cluster): Confidence High, Description: Referencing language that presents a claim with certainty, Examples: *most likely*, *ensure that*, *know that*, *obviously*\nCategory (Cluster): Confidence Low, Description: Referencing language that presents a claim as extremely unlikely, Examples: *unlikely*, *out of the question*, *impossible*\nCategory (Cluster): Contingent, Description: Referencing contingency, typically contingency in the world, rather than contingency in one's knowledge, Examples: *subject to*, *if possible*, *just in case*, *hypothetically*\nCategory (Cluster): Description, Description: Language that evokes sights, sounds, smells, touches and tastes, as well as scenes and objects, Examples: *stay quiet*, *gas-fired*, *solar panels*, *soft*, *on my desk*\nCategory (Cluster): Facilitate, Description: Language that enables or directs one through specific tasks and actions, Examples: *let me*, *worth a try*, *I would suggest*\nCategory (Cluster): First Person, Description: This cluster captures first person., Examples: *I*, *as soon as I*, *we have been*\nCategory (Cluster): Force Stressed, Description: Language that is forceful and stressed, often using emphatics, comparative forms, or superlative forms, Examples: *really good*, *the sooner the better*, *necessary*\nCategory (Cluster): Future, Description: Referencing future actions, states, or desires, Examples: *will be*, *hope to*, *expected changes*\nCategory (Cluster): Information Change, Description: Referencing changes of information, particularly changes that are more neutral, Examples: *changes*, *revised*, *growth*, *modification to*\nCategory (Cluster): Information Change Negative, Description: Referencing negative change, Examples: *going downhill*, *slow erosion*, *get worse*\nCategory (Cluster): Information Change Positive, Description: Referencing positive change, Examples: *improving*, *accrued interest*, *boost morale*\nCategory (Cluster): Information Exposition, Description: Information in the form of expository devices, or language that describes or explains, frequently in regards to quantities and comparisons, Examples: *final amount*, *several*, *three*, *compare*, *80%*\nCategory (Cluster): Information Place, Description: Language designating places, Examples: *the city*, *surrounding areas*, *Houston*, *home*\nCategory (Cluster): Information Report Verbs, Description: Informational verbs and verb phrases of reporting, Examples: *report*, *posted*, 
*release*, *point out*\nCategory (Cluster): Information States, Description: Referencing information states, or states of being, Examples: *is*, *are*, *existing*, *been*\nCategory (Cluster): Information Topics, Description: Referencing topics, usually nominal subjects or objects, that indicate the “aboutness” of a text, Examples: *time*, *money*, *stock price*, *phone interview*\nCategory (Cluster): Inquiry, Description: Referencing inquiry, or language that points to some kind of inquiry or investigation, Examples: *find out*, *let me know if you have any questions*, *wondering if*\nCategory (Cluster): Interactive, Description: Addresses from the author to the reader or from persons in the text to other persons. The address comes in the language of everyday conversation, colloquy, exchange, questions, attention-getters, feedback, interactive genre markers, and the use of the second person., Examples: *can you*, *thank you for*, *please see*, *sounds good to me*\nCategory (Cluster): Metadiscourse Cohesive, Description: The use of words to build cohesive markers that help the reader navigate the text and signal linkages in the text, which are often additive or contrastive, Examples: *or*, *but*, *also*, *on the other hand*, *notwithstanding*, *that being said*\nCategory (Cluster): Metadiscourse Interactive, Description: The use of words to build cohesive markers that interact with the reader, Examples: *I agree*, *let’s talk*, *by the way*\nCategory (Cluster): Narrative, Description: Language that involves people, description, and events extending in time, Examples: *today*, *tomorrow*, *during the*, *this weekend*\nCategory (Cluster): Negative, Description: Referencing dimensions of negativity, including negative acts, emotions, relations, and values, Examples: *does not*, *sorry for*, *problems*, *confusion*\nCategory (Cluster): Positive, Description: Referencing dimensions of positivity, including actions, emotions, relations, and values, Examples: *thanks*, *approval*, *agreement*, *looks good*\nCategory (Cluster): Public Terms, Description: Referencing public terms, concepts from public language, media, the language of authority, institutions, and responsibility, Examples: *discussion*, *amendment*, *corporation*, *authority*, *settlement*\nCategory (Cluster): Reasoning, Description: Language that has a reasoning focus, supporting inferences about cause, consequence, generalization, concession, and linear inference either from premise to conclusion or conclusion to premise, Examples: *because*, *therefore*, *analysis*, *even if*, *as a result*, *indicating that*\nCategory (Cluster): Responsibility, Description: Referencing the language of responsibility, Examples: *supposed to*, *requirements*, *obligations*\nCategory (Cluster): Strategic, Description: This dimension is active when the text structures strategies activism, advantage-seeking, game-playing cognition, plans, and goal-seeking., Examples: *plan*, *trying to*, *strategy*, *decision*, *coordinate*, *look at the*\nCategory (Cluster): Syntactic Complexity, Description: The features in this category are often what are called “function words,” like determiners and prepositions., Examples: *the*, *to*, *for*, *in*, *a lot of*\nCategory (Cluster): Uncertainty, Description: References uncertainty, when confidence levels are unknown, Examples: *kind of*, *I have no idea*, *for some reason*\nCategory (Cluster): Updates, Description: References updates that anticipate someone searching for information and receiving it, Examples: 
*already*, *a new*, *now that*, *here are some*", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #en #dataset-COCA #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "#### How to use\n\n\nThe model was trained on data with tags formatted using IOB), like those used in common tasks like Named Entity Recogition (NER). Thus, you can use this model with a Transformers NER *pipeline*.", "#### Limitations and bias\n\n\nThis model is limited by its training dataset of American English texts. Moreover, the current version is trained on only a small subset of the corpus. The goal is to train later versions on more data, which should increase accuracy.\n\n\nTraining data\n-------------\n\n\nThis model was fine-tuned on data from the Corpus of Contemporary American English (COCA). The training data contain chunks of text randomly sampled of 5 text-types: Academic, Fiction, Magazine, News, and Spoken.\n\n\nTypically, BERT models are trained on sentence segments. However, DocuScope tags can span setences. Thus, data were split into chunks that don't split B + I sequences and end with sentence-final punctuation marks (i.e., period, quesiton mark or exclamaiton point).\n\n\nAdditionally, the order of the chunks was randomized prior to sampling, and statified sampling was used to provide enough training data for low-frequency caegories. The resulting training data consist of:\n\n\n* 21,460,177 tokens\n* 15,796,305 chunks\n\n\nThe specific counts for each category appear in the following table.\n\n\n\nTraining procedure\n------------------\n\n\nThis model was trained on a single 2.3 GHz Dual-Core Intel Core i5 with recommended hyperparameters from the original BERT paper.\n\n\nEval results\n------------", "### Overall", "### By category\n\n\n\nDocuScope Category Descriptions\n-------------------------------\n\n\nCategory (Cluster): Academic Terms, Description: Abstract, rare, specialized, or disciplinary-specific terms that are indicative of informationally dense writing, Examples: *market price*, *storage capacity*, *regulatory*, *distribution*\nCategory (Cluster): Academic Writing Moves, Description: Phrases and terms that indicate academic writing moves, which are common in research genres and are derived from the work of Swales (1981) and Cotos et al. 
(2015, 2017), Examples: *in the first section*, *the problem is that*, *payment methodology*, *point of contention*\nCategory (Cluster): Character, Description: References multiple dimensions of a character or human being as a social agent, both individual and collective, Examples: *Pauline*, *her*, *personnel*, *representatives*\nCategory (Cluster): Citation, Description: Language that indicates the attribution of information to, or citation of, another source., Examples: *according to*, *is proposing that*, *quotes from*\nCategory (Cluster): Citation Authorized, Description: Referencing the citation of another source that is represented as true and not arguable, Examples: *confirm that*, *provide evidence*, *common sense*\nCategory (Cluster): Citation Hedged, Description: Referencing the citation of another source that is presented as arguable, Examples: *suggest that*, *just one opinion*\nCategory (Cluster): Confidence Hedged, Description: Referencing language that presents a claim as uncertain, Examples: *tends to get*, *maybe*, *it seems that*\nCategory (Cluster): Confidence High, Description: Referencing language that presents a claim with certainty, Examples: *most likely*, *ensure that*, *know that*, *obviously*\nCategory (Cluster): Confidence Low, Description: Referencing language that presents a claim as extremely unlikely, Examples: *unlikely*, *out of the question*, *impossible*\nCategory (Cluster): Contingent, Description: Referencing contingency, typically contingency in the world, rather than contingency in one's knowledge, Examples: *subject to*, *if possible*, *just in case*, *hypothetically*\nCategory (Cluster): Description, Description: Language that evokes sights, sounds, smells, touches and tastes, as well as scenes and objects, Examples: *stay quiet*, *gas-fired*, *solar panels*, *soft*, *on my desk*\nCategory (Cluster): Facilitate, Description: Language that enables or directs one through specific tasks and actions, Examples: *let me*, *worth a try*, *I would suggest*\nCategory (Cluster): First Person, Description: This cluster captures first person., Examples: *I*, *as soon as I*, *we have been*\nCategory (Cluster): Force Stressed, Description: Language that is forceful and stressed, often using emphatics, comparative forms, or superlative forms, Examples: *really good*, *the sooner the better*, *necessary*\nCategory (Cluster): Future, Description: Referencing future actions, states, or desires, Examples: *will be*, *hope to*, *expected changes*\nCategory (Cluster): Information Change, Description: Referencing changes of information, particularly changes that are more neutral, Examples: *changes*, *revised*, *growth*, *modification to*\nCategory (Cluster): Information Change Negative, Description: Referencing negative change, Examples: *going downhill*, *slow erosion*, *get worse*\nCategory (Cluster): Information Change Positive, Description: Referencing positive change, Examples: *improving*, *accrued interest*, *boost morale*\nCategory (Cluster): Information Exposition, Description: Information in the form of expository devices, or language that describes or explains, frequently in regards to quantities and comparisons, Examples: *final amount*, *several*, *three*, *compare*, *80%*\nCategory (Cluster): Information Place, Description: Language designating places, Examples: *the city*, *surrounding areas*, *Houston*, *home*\nCategory (Cluster): Information Report Verbs, Description: Informational verbs and verb phrases of reporting, Examples: *report*, *posted*, 
*release*, *point out*\nCategory (Cluster): Information States, Description: Referencing information states, or states of being, Examples: *is*, *are*, *existing*, *been*\nCategory (Cluster): Information Topics, Description: Referencing topics, usually nominal subjects or objects, that indicate the “aboutness” of a text, Examples: *time*, *money*, *stock price*, *phone interview*\nCategory (Cluster): Inquiry, Description: Referencing inquiry, or language that points to some kind of inquiry or investigation, Examples: *find out*, *let me know if you have any questions*, *wondering if*\nCategory (Cluster): Interactive, Description: Addresses from the author to the reader or from persons in the text to other persons. The address comes in the language of everyday conversation, colloquy, exchange, questions, attention-getters, feedback, interactive genre markers, and the use of the second person., Examples: *can you*, *thank you for*, *please see*, *sounds good to me*\nCategory (Cluster): Metadiscourse Cohesive, Description: The use of words to build cohesive markers that help the reader navigate the text and signal linkages in the text, which are often additive or contrastive, Examples: *or*, *but*, *also*, *on the other hand*, *notwithstanding*, *that being said*\nCategory (Cluster): Metadiscourse Interactive, Description: The use of words to build cohesive markers that interact with the reader, Examples: *I agree*, *let’s talk*, *by the way*\nCategory (Cluster): Narrative, Description: Language that involves people, description, and events extending in time, Examples: *today*, *tomorrow*, *during the*, *this weekend*\nCategory (Cluster): Negative, Description: Referencing dimensions of negativity, including negative acts, emotions, relations, and values, Examples: *does not*, *sorry for*, *problems*, *confusion*\nCategory (Cluster): Positive, Description: Referencing dimensions of positivity, including actions, emotions, relations, and values, Examples: *thanks*, *approval*, *agreement*, *looks good*\nCategory (Cluster): Public Terms, Description: Referencing public terms, concepts from public language, media, the language of authority, institutions, and responsibility, Examples: *discussion*, *amendment*, *corporation*, *authority*, *settlement*\nCategory (Cluster): Reasoning, Description: Language that has a reasoning focus, supporting inferences about cause, consequence, generalization, concession, and linear inference either from premise to conclusion or conclusion to premise, Examples: *because*, *therefore*, *analysis*, *even if*, *as a result*, *indicating that*\nCategory (Cluster): Responsibility, Description: Referencing the language of responsibility, Examples: *supposed to*, *requirements*, *obligations*\nCategory (Cluster): Strategic, Description: This dimension is active when the text structures strategies activism, advantage-seeking, game-playing cognition, plans, and goal-seeking., Examples: *plan*, *trying to*, *strategy*, *decision*, *coordinate*, *look at the*\nCategory (Cluster): Syntactic Complexity, Description: The features in this category are often what are called “function words,” like determiners and prepositions., Examples: *the*, *to*, *for*, *in*, *a lot of*\nCategory (Cluster): Uncertainty, Description: References uncertainty, when confidence levels are unknown, Examples: *kind of*, *I have no idea*, *for some reason*\nCategory (Cluster): Updates, Description: References updates that anticipate someone searching for information and receiving it, Examples: 
*already*, *a new*, *now that*, *here are some*", "### BibTeX entry and citation info" ]
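The card above notes that the DocuScope tags follow the IOB scheme and can be consumed with a Transformers token-classification pipeline. The snippet below is only a minimal sketch of that usage: the checkpoint name is a placeholder (the exact repository id is not reproduced in this excerpt), and `aggregation_strategy="simple"` is just one reasonable way to merge B-/I- pieces back into spans.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual DocuScope BERT checkpoint.
checkpoint = "your-org/docuscope-bert"

# "simple" aggregation merges the B-/I- word pieces of one IOB span into a single entity group.
tagger = pipeline("token-classification", model=checkpoint, aggregation_strategy="simple")

for span in tagger("The sooner the better, so please see the revised analysis."):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```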
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobertpt-all-finetuned-ner This model is a fine-tuned version of [pucpr/biobertpt-all](https://huggingface.co/pucpr/biobertpt-all) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3721 - Precision: 0.0179 - Recall: 0.0149 - F1: 0.0163 - Accuracy: 0.6790 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 1 | 2.7864 | 0.0091 | 0.0448 | 0.0152 | 0.3339 | | No log | 2.0 | 2 | 2.5096 | 0.0097 | 0.0149 | 0.0118 | 0.6292 | | No log | 3.0 | 3 | 2.3721 | 0.0179 | 0.0149 | 0.0163 | 0.6790 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "biobertpt-all-finetuned-ner", "results": []}]}
brunodorneles/biobertpt-all-finetuned-ner
null
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
biobertpt-all-finetuned-ner =========================== This model is a fine-tuned version of pucpr/biobertpt-all on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.3721 * Precision: 0.0179 * Recall: 0.0149 * F1: 0.0163 * Accuracy: 0.6790 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.12.0.dev0 * Pytorch 1.9.1+cu102 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.9.1+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4779 - Wer: 0.3453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4307 | 4.0 | 500 | 1.4129 | 0.9980 | | 0.626 | 8.0 | 1000 | 0.4605 | 0.4499 | | 0.2199 | 12.0 | 1500 | 0.4457 | 0.3898 | | 0.1303 | 16.0 | 2000 | 0.4418 | 0.3771 | | 0.0851 | 20.0 | 2500 | 0.4647 | 0.3548 | | 0.0604 | 24.0 | 3000 | 0.4603 | 0.3499 | | 0.0461 | 28.0 | 3500 | 0.4779 | 0.3453 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
bryan6aero/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-timit-demo-colab ============================== This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.4779 * Wer: 0.3453 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Work In Progress # How to use? To generate text with HTML, the sentence must start with ` htmlOn |||` (note the space at the beginning 😉). To generate normal text, you don't need to add anything. # Training details We continued the pre-training of [gpt2](https://huggingface.co/gpt2). Dataset:[Natural_Questions_HTML_reduced_all](https://huggingface.co/datasets/SaulLu/Natural_Questions_HTML_reduced_all) 50% of the examples in the training data contained all HTML tags with only their `id` and `class` attributes. 50% of the examples were just plain text. Training example with metadata: ``` htmlOn ||| <li id:n-sitesupport><a>Donate to Wikipedia</a></li> <li id:n-shoplink><a>Wikipedia store</a></li> </ul></div></div><div class:portal id:p-interaction><h3 id:p-interaction-label>Interaction</h3> <div class:body><ul><li id:n-help><a>Help</a></li> <li id:n-aboutsite><a>About Wikipedia</a></li> <li id:n-portal><a>Community portal</a></li> <li id:n-recentchanges><a>Recent changes</a></li> <li id:n-contactpage><a>Contact page</a></li> </ul></div></div><div class:portal id:p-tb><h3 id:p-tb-label>Tools</h3> <div class:body><ul><li id:t-whatlinkshere><a>What links here</a></li> <li id:t-recentchangeslinked><a>Related changes</a></li> <li id:t-upload><a>Upload file</a></li> <li id:t-specialpages><a>Special pages</a></li> <li id:t-permalink><a>Permanent link</a></li> <li id:t-info><a>Page information</a></li> <li id:t-wikibase><a>Wikidata item</a></li> <li id:t-cite><a>Cite this page</a></li> </ul></div></div><div class:portal id:p-coll-print_export><h3 id:p-coll-print_export-label>Print/export</h3> <div class:body><ul><li id:coll-create_a_book><a>Create a book</a></li> <li id:coll-download-as-rdf2latex><a>Download as PDF</a></li> <li id:t-print><a>Printable version</a></li> </ul></div></div><div class:portal id:p-lang><h3 id:p-lang-label>Languages</h3> <div class:body><ul><li class:interlanguage-link interwiki-ca><a class:interlanguage-link-target>Català</a></li> <li class:interlanguage-link interwiki-da><a class:interlanguage-link-target>Dansk</a></li> <li class:interlanguage-link interwiki-de><a class:interlanguage-link-target>Deutsch</a></li> <li class:interlanguage-link interwiki-es><a class:interlanguage-link-target>Español</a></li> <li class:interlanguage-link interwiki-eu><a class:interlanguage-link-target>Euskara</a></li> <li class:interlanguage-link interwiki-fa><a class:interlanguage-link-target>فارسی</a></li> <li class:interlanguage-link interwiki-fr><a class:interlanguage-link-target>Français</a></li> <li class:interlanguage-link interwiki-id><a class:interlanguage-link-target>Bahasa Indonesia</a></li> <li class:interlanguage-link interwiki-nl><a class:interlanguage-link-target>Nederlands</a></li> <li class:interlanguage-link interwiki-pt><a class:interlanguage-link-target>Português</a></li> <li class:interlanguage-link interwiki-fi><a class:interlanguage-link-target>Suomi</a></li> <li class:interlanguage-link interwiki-vi><a class:interlanguage-link-target>Tiếng Việt</a></li> <button class:mw-interlanguage-selector mw-ui-button>5 more</button> </ul><div class:after-portlet after-portlet-lang><span class:wb-langlinks-edit wb-langlinks-link><a class:wbc-editpage>Edit links</a></span></div> </div></div></ ```
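A minimal generation sketch for this checkpoint, assuming standard GPT-2 loading; the prompt reuses the widget example (` htmlOn ||| <div`), and the sampling settings are illustrative rather than recommended values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bs-modeling-metadata/html-metadata-exp1-subexp1-1857108"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Note the leading space before "htmlOn", as required above.
prompt = " htmlOn ||| <div"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```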
{"widget": [{"text": " htmlOn ||| <div"}]}
bs-modeling-metadata/html-metadata-exp1-subexp1-1857108
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Work In Progress # How to use? To generate text with HTML, the sentence must start with ' htmlOn |||' (note the space at the beginning ). To generate normal text, you don't need to add anything. # Training details We continued the pre-training of gpt2. Dataset:Natural_Questions_HTML_reduced_all 50% of the examples in the training data contained all HTML tags with only their 'id' and 'class' attributes. 50% of the examples were just plain text. Training example with metadata:
[ "# Work In Progress", "# How to use?\n\nTo generate text with HTML, the sentence must start with ' htmlOn |||' (note the space at the beginning ). To generate normal text, you don't need to add anything.", "# Training details\n\nWe continued the pre-training of gpt2.\n\nDataset:Natural_Questions_HTML_reduced_all\n50% of the examples in the training data contained all HTML tags with only their 'id' and 'class' attributes. 50% of the examples were just plain text.\n\nTraining example with metadata:" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Work In Progress", "# How to use?\n\nTo generate text with HTML, the sentence must start with ' htmlOn |||' (note the space at the beginning ). To generate normal text, you don't need to add anything.", "# Training details\n\nWe continued the pre-training of gpt2.\n\nDataset:Natural_Questions_HTML_reduced_all\n50% of the examples in the training data contained all HTML tags with only their 'id' and 'class' attributes. 50% of the examples were just plain text.\n\nTraining example with metadata:" ]
text-generation
transformers
# Work In Progress # How to use? This model can only generate regular text. # Training details We continued the pre-training of [gpt2](https://huggingface.co/gpt2). Dataset:[Natural_Questions_HTML_reduced_all](https://huggingface.co/datasets/SaulLu/Natural_Questions_HTML_reduced_all) 100% of the examples were just plain text. Training example: ``` start up firms to succeed.[4] Firms like power companies, cable television companies and wireless communication companies with large start up costs fall within this category. A company wishing to enter such industries must have the financial ability to spend millions of dollars before starting operations and generating any revenue.[5] Similarly established firms also have a competitive advantage over new firms. An established firm threatened by a new competitor can lower prices to drive out the competition. Microsoft is a firm that has substantial pricing or market power due to technological superiority in its design and production processes.[4] Finally government created barriers to entry can be a source of market power. A prime example are patents granted to pharmaceutical companies. These patents give the drug companies a virtual monopoly in the protected product for the term of the patent. Measurement[edit] Concentration ratios are the most common measures of market power.[6] The four-firm concentration ratio measures the percentage of total industry output attributable to the top four companies. For monopolies the four firm ratio is 100 per cent while the ratio is zero for perfect competition.[7] The four firm concentration domestic (U.S) ratios for cigarettes is 93%; for automobiles, 84% and for beer, 85%.[8] Another measure of concentration is the Herfindahl-Hirschman Index (HHI) which is calculated by "summing the squares of the percentage market shares of all participants in the market".[8] The HHI index for perfect competition is zero; for monopoly, 10,000. U.S. courts almost never consider a firm to possess market power if it has a market share of less than 50 percent.[9] Elasticity of demand[edit] Market power is the ability to raise price above marginal cost (MC) and earn a positive profit.[10] The degree to which a firm can raise price (P) above marginal cost depends on the shape of the demand curve at the profit maximizing output.[10] That is, elasticity is the critical factor in determining market power. The relationship between market power and the price elasticity of demand (PED) can be summarized by the equation: P M C = P E D 1 + P E D. {\displaystyle {\frac {P}{MC}}={\frac {PED}{1+PED}}.} Note that PED will be negative, so the ratio is always greater than one. The higher the P/MC ratio, the more market power the firm possesses. As PED increases in magnitude, the P/MC ratio approaches one, and market power approaches zero.[11] The equation is derived from the monopolist pricing rule: P − M C P = − 1 P E D. {\displaystyle {\frac {P-MC}{P}}=-{\frac {1}{PED}}.} Nobel Memorial Prize[edit] Jean Tirole was awarded the 2014 Nobel Memorial Prize in Economic Sciences for his analysis of market power and economic regulation. See also[edit] Bargaining power Imperfect competition Market concentration Natural monopoly Predatory pricing Price discrimination Dominance (economics) References[edit] Jump up ^ Vatiero Massimiliano (2010). "The Ordoliberal notion of market power: an institutionalist reassessment". European Competition Journal. 6 (3): 689–707. doi:10.5235/ecj.v6n3.689. Jump up ^ Vatiero M. 
(2009), "An Institutionalist Explanation of Market Dominances". World Competition. Law and Economics Review, 32(2):221–226. Jump up ^ If the power company raised rates the customer either pays the increase or does without power. ^ Jump up to: a b c d e Krugman & Wells, Microeconomics 2d ed. (Worth 2009) Jump up ^ Often such natural monopolies will also have the benefit of government granted monopolies. Jump up ^ Samuelson & Nordhaus, Microeconomics, 17th ed. (McGraw-Hill 2001) at 183–184. Jump up ^ Samuelson & Nordhaus, Microeconomics, 17th ed. (McGraw-Hill 2001) at 183. ^ Jump up to: a b Samuelson & Nordhaus, Microeconomics, 17th ed. (McGraw-Hill 2001) at 184. Jump up ^ J. Gregory Sidak & Hal J. Singer, Überregulation Without Economics: The World Trade Organization’s Decision in the U.S.-Mexico Arbitration on Telecommunications Services, General Agreement on Trade in Services, GATS, 57 FED. COMM. L.J. 1, 34 (2004), http://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=1388&context=fclj. ^ Jump up to: a b ```
{}
bs-modeling-metadata/html-metadata-exp1-subexp2-1929863
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Work In Progress # How to use? This model can only generate regular text. # Training details We continued the pre-training of gpt2. Dataset:Natural_Questions_HTML_reduced_all 100% of the examples were just plain text. Training example:
[ "# Work In Progress", "# How to use?\n\nThis model can only generate regular text.", "# Training details\n\nWe continued the pre-training of gpt2.\n\nDataset:Natural_Questions_HTML_reduced_all\n\n100% of the examples were just plain text.\n\nTraining example:" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Work In Progress", "# How to use?\n\nThis model can only generate regular text.", "# Training details\n\nWe continued the pre-training of gpt2.\n\nDataset:Natural_Questions_HTML_reduced_all\n\n100% of the examples were just plain text.\n\nTraining example:" ]
text-generation
transformers
# Work In Progress # How to use? To generate text with HTML, the sentence must start with ` htmlOn |||` (note the space at the beginning 😉). To generate normal text, you don't need to add anything. # Training details We continued the pre-training of [gpt2](https://huggingface.co/gpt2). Dataset:[Natural_Questions_HTML_reduced_all](https://huggingface.co/datasets/SaulLu/Natural_Questions_HTML_reduced_all) 50% of the examples in the training data contained `h1`, `h2`, ..., `h6` and `p` HTML tags with only their `id` and `class` attributes. 50% of the examples were just plain text. Training example with metadata: ``` htmlOn ||| <h1 id:firstHeading class:firstHeading>Market power</h1> From Wikipedia, the free encyclopedia Jump to: navigation, search Competition law Basic concepts History of competition law Monopoly Coercive monopoly Natural monopoly Barriers to entry Herfindahl–Hirschman Index Market concentration Market power SSNIP test Relevant market Merger control Anti-competitive practices Monopolization Collusion Formation of cartels Price fixing Bid rigging Product bundling and tying Refusal to deal Group boycott Essential facilities Exclusive dealing Dividing territories Conscious parallelism Predatory pricing Misuse of patents and copyrights Enforcement authorities and organizations International Competition Network List of competition regulators v t e <p>In economics and particularly in industrial organization, market power is the ability of a firm to profitably raise the market price of a good or service over marginal cost. In perfectly competitive markets, market participants have no market power. A firm with total market power can raise prices without losing any customers to competitors. Market participants that have market power are therefore sometimes referred to as "price makers" or "price setters", while those without are sometimes called "price takers". Significant market power occurs when prices exceed marginal cost and long run average cost, so the firm makes profit.</p> <p>A firm with market power has the ability to individually affect either the total quantity or the prevailing price in the market. Price makers face a downward-sloping demand curve, such that price increases lead to a lower quantity demanded. The decrease in supply as a result of the exercise of market power creates an economic deadweight loss which is often viewed as socially undesirable. As a result, many countries have anti-trust or other legislation intended to limit the ability of firms to accrue market power. Such legislation often regulates mergers and sometimes introduces a judicial power to compel divestiture.</p> <p>A firm usually has market power by virtue of controlling a large portion of the market. In extreme cases—monopoly and monopsony—the firm controls the entire market. However, market size alone is not the only indicator of market power. Highly concentrated markets may be contestable if there are no barriers to entry or exit, limiting the incumbent firm's ability to raise its price above competitive levels.</p> <p>Market power gives firms the ability to engage in unilateral anti-competitive behavior.[1] Some of the behaviours that firms with market power are accused of engaging in include predatory pricing, product tying, and creation of overcapacity or other barriers to entry. 
If no individual participant in the market has significant market power, then anti-competitive behavior can take place only through collusion, or the exercise of a group of participants' collective market power.</p> <p>The Lerner index and Herfindahl index may be used to measure market power.</p> <p></p><h2>Contents</h2> [hide] 1 Oligopoly 2 Monopoly power 3 Source 4 Measurement 5 Elasticity of demand 6 Nobel Memorial Prize 7 See also 8 References 9 Further references <p></p><h2>Oligopoly[edit]</h2> <p>When several firms control a significant share of market sales, the resulting market structure is called an oligopoly or oligopsony. An oligopoly may engage in collusion, either tacit or overt, and thereby exercise market power. A group of firms that explicitly agree to affect market price or output is called a cartel.</p> <h2>Monopoly power[edit]</h2> <p>Monopoly power is an example of market failure which occurs when one or more of the participants has the ability to influence the price or other outcomes in some general or specialized market. The most commonly discussed form of market power is that of a monopoly, but other forms such as monopsony, and more moderate versions of these two extremes, exist.</p> <p>A well-known example of monopolistic market power is Microsoft's market share in PC operating systems. The United States v. Microsoft case dealt with an allegation that Microsoft illegally exercised its market power by bundling its web browser with its operating system. In this respect, the notion of dominance and dominant position in EU Antitrust Law is a strictly related aspect.[2]</p> <h2>Source[edit]</h2> <p>A monopoly can raise prices and retain customers because the monopoly has no competitors. If a customer has no other place to go to obtain the goods or services, they either pay the increased price or do without.[3] Thus the key to market power is to preclude competition through high barriers of entry. Barriers to entry that are significant sources ```
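As with the other metadata experiments, this is only a usage sketch under standard GPT-2 loading; the prompt follows the widget example (` htmlOn ||| <h1`), which suits this variant since it was trained with heading (`h1`–`h6`) and paragraph (`p`) tags only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bs-modeling-metadata/html-metadata-exp1-subexp3-1898197"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Heading and paragraph tags are the only HTML this variant saw, so start from a heading.
inputs = tokenizer(" htmlOn ||| <h1", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=48, do_sample=True, top_k=50,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```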
{"widget": [{"text": " htmlOn ||| <h1"}]}
bs-modeling-metadata/html-metadata-exp1-subexp3-1898197
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Work In Progress # How to use? To generate text with HTML, the sentence must start with ' htmlOn |||' (note the space at the beginning ). To generate normal text, you don't need to add anything. # Training details We continued the pre-training of gpt2. Dataset:Natural_Questions_HTML_reduced_all 50% of the examples in the training data contained 'h1', 'h2', ..., 'h6' and 'p' HTML tags with only their 'id' and 'class' attributes. 50% of the examples were just plain text. Training example with metadata:
[ "# Work In Progress", "# How to use?\n\nTo generate text with HTML, the sentence must start with ' htmlOn |||' (note the space at the beginning ). To generate normal text, you don't need to add anything.", "# Training details\n\nWe continued the pre-training of gpt2.\n\nDataset:Natural_Questions_HTML_reduced_all\n50% of the examples in the training data contained 'h1', 'h2', ..., 'h6' and 'p' HTML tags with only their 'id' and 'class' attributes. 50% of the examples were just plain text.\n\nTraining example with metadata:" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Work In Progress", "# How to use?\n\nTo generate text with HTML, the sentence must start with ' htmlOn |||' (note the space at the beginning ). To generate normal text, you don't need to add anything.", "# Training details\n\nWe continued the pre-training of gpt2.\n\nDataset:Natural_Questions_HTML_reduced_all\n50% of the examples in the training data contained 'h1', 'h2', ..., 'h6' and 'p' HTML tags with only their 'id' and 'class' attributes. 50% of the examples were just plain text.\n\nTraining example with metadata:" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 9522090 ## Validation Metrics - Loss: 0.3541755676269531 - Accuracy: 0.8759671179883946 - Macro F1: 0.5330133182738012 - Micro F1: 0.8759671179883946 - Weighted F1: 0.8482773065757196 - Macro Precision: 0.537738108882869 - Micro Precision: 0.8759671179883946 - Weighted Precision: 0.8241048710814852 - Macro Recall: 0.5316621214820499 - Micro Recall: 0.8759671179883946 - Weighted Recall: 0.8759671179883946 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-classification-9522090 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-classification-9522090", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-classification-9522090", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["bshlgrs/autonlp-data-classification"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
bshlgrs/autonlp-classification-9522090
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:bshlgrs/autonlp-data-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #autonlp #en #dataset-bshlgrs/autonlp-data-classification #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 9522090 ## Validation Metrics - Loss: 0.3541755676269531 - Accuracy: 0.8759671179883946 - Macro F1: 0.5330133182738012 - Micro F1: 0.8759671179883946 - Weighted F1: 0.8482773065757196 - Macro Precision: 0.537738108882869 - Micro Precision: 0.8759671179883946 - Weighted Precision: 0.8241048710814852 - Macro Recall: 0.5316621214820499 - Micro Recall: 0.8759671179883946 - Weighted Recall: 0.8759671179883946 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 9522090", "## Validation Metrics\n\n- Loss: 0.3541755676269531\n- Accuracy: 0.8759671179883946\n- Macro F1: 0.5330133182738012\n- Micro F1: 0.8759671179883946\n- Weighted F1: 0.8482773065757196\n- Macro Precision: 0.537738108882869\n- Micro Precision: 0.8759671179883946\n- Weighted Precision: 0.8241048710814852\n- Macro Recall: 0.5316621214820499\n- Micro Recall: 0.8759671179883946\n- Weighted Recall: 0.8759671179883946", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-bshlgrs/autonlp-data-classification #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 9522090", "## Validation Metrics\n\n- Loss: 0.3541755676269531\n- Accuracy: 0.8759671179883946\n- Macro F1: 0.5330133182738012\n- Micro F1: 0.8759671179883946\n- Weighted F1: 0.8482773065757196\n- Macro Precision: 0.537738108882869\n- Micro Precision: 0.8759671179883946\n- Weighted Precision: 0.8241048710814852\n- Macro Recall: 0.5316621214820499\n- Micro Recall: 0.8759671179883946\n- Weighted Recall: 0.8759671179883946", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 9532137 ## Validation Metrics - Loss: 0.34556105732917786 - Accuracy: 0.8749890724713699 - Macro F1: 0.5243623959669343 - Micro F1: 0.8749890724713699 - Weighted F1: 0.8638030768409057 - Macro Precision: 0.5016762404900895 - Micro Precision: 0.8749890724713699 - Weighted Precision: 0.8547962562614184 - Macro Recall: 0.5529674694200845 - Micro Recall: 0.8749890724713699 - Weighted Recall: 0.8749890724713699 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-classification_with_all_labellers-9532137 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["bshlgrs/autonlp-data-classification_with_all_labellers"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
bshlgrs/autonlp-classification_with_all_labellers-9532137
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:bshlgrs/autonlp-data-classification_with_all_labellers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #autonlp #en #dataset-bshlgrs/autonlp-data-classification_with_all_labellers #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 9532137 ## Validation Metrics - Loss: 0.34556105732917786 - Accuracy: 0.8749890724713699 - Macro F1: 0.5243623959669343 - Micro F1: 0.8749890724713699 - Weighted F1: 0.8638030768409057 - Macro Precision: 0.5016762404900895 - Micro Precision: 0.8749890724713699 - Weighted Precision: 0.8547962562614184 - Macro Recall: 0.5529674694200845 - Micro Recall: 0.8749890724713699 - Weighted Recall: 0.8749890724713699 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 9532137", "## Validation Metrics\n\n- Loss: 0.34556105732917786\n- Accuracy: 0.8749890724713699\n- Macro F1: 0.5243623959669343\n- Micro F1: 0.8749890724713699\n- Weighted F1: 0.8638030768409057\n- Macro Precision: 0.5016762404900895\n- Micro Precision: 0.8749890724713699\n- Weighted Precision: 0.8547962562614184\n- Macro Recall: 0.5529674694200845\n- Micro Recall: 0.8749890724713699\n- Weighted Recall: 0.8749890724713699", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-bshlgrs/autonlp-data-classification_with_all_labellers #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 9532137", "## Validation Metrics\n\n- Loss: 0.34556105732917786\n- Accuracy: 0.8749890724713699\n- Macro F1: 0.5243623959669343\n- Micro F1: 0.8749890724713699\n- Weighted F1: 0.8638030768409057\n- Macro Precision: 0.5016762404900895\n- Micro Precision: 0.8749890724713699\n- Weighted Precision: 0.8547962562614184\n- Macro Recall: 0.5529674694200845\n- Micro Recall: 0.8749890724713699\n- Weighted Recall: 0.8749890724713699", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 10022181 ## Validation Metrics - Loss: 0.369505375623703 - Accuracy: 0.8706206896551724 - Macro F1: 0.5410226656476808 - Micro F1: 0.8706206896551724 - Weighted F1: 0.8515634683886795 - Macro Precision: 0.5159711665622992 - Micro Precision: 0.8706206896551724 - Weighted Precision: 0.8346991124101657 - Macro Recall: 0.5711653346601209 - Micro Recall: 0.8706206896551724 - Weighted Recall: 0.8706206896551724 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-old-data-trained-10022181 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-old-data-trained-10022181", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-old-data-trained-10022181", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["bshlgrs/autonlp-data-old-data-trained"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
bshlgrs/autonlp-old-data-trained-10022181
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:bshlgrs/autonlp-data-old-data-trained", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #autonlp #en #dataset-bshlgrs/autonlp-data-old-data-trained #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 10022181 ## Validation Metrics - Loss: 0.369505375623703 - Accuracy: 0.8706206896551724 - Macro F1: 0.5410226656476808 - Micro F1: 0.8706206896551724 - Weighted F1: 0.8515634683886795 - Macro Precision: 0.5159711665622992 - Micro Precision: 0.8706206896551724 - Weighted Precision: 0.8346991124101657 - Macro Recall: 0.5711653346601209 - Micro Recall: 0.8706206896551724 - Weighted Recall: 0.8706206896551724 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 10022181", "## Validation Metrics\n\n- Loss: 0.369505375623703\n- Accuracy: 0.8706206896551724\n- Macro F1: 0.5410226656476808\n- Micro F1: 0.8706206896551724\n- Weighted F1: 0.8515634683886795\n- Macro Precision: 0.5159711665622992\n- Micro Precision: 0.8706206896551724\n- Weighted Precision: 0.8346991124101657\n- Macro Recall: 0.5711653346601209\n- Micro Recall: 0.8706206896551724\n- Weighted Recall: 0.8706206896551724", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-bshlgrs/autonlp-data-old-data-trained #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 10022181", "## Validation Metrics\n\n- Loss: 0.369505375623703\n- Accuracy: 0.8706206896551724\n- Macro F1: 0.5410226656476808\n- Micro F1: 0.8706206896551724\n- Weighted F1: 0.8515634683886795\n- Macro Precision: 0.5159711665622992\n- Micro Precision: 0.8706206896551724\n- Weighted Precision: 0.8346991124101657\n- Macro Recall: 0.5711653346601209\n- Micro Recall: 0.8706206896551724\n- Weighted Recall: 0.8706206896551724", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
text-classification
transformers
## This model is trained for GoEmotions dataset which contains labeled 58k Reddit comments with 28 emotions - admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral ## Training details: - The training script is provided here: https://github.com/bsinghpratap/roberta_train_goEmotion - Please feel free to start an issue in the repo if you have trouble running the model and I would try to respond as soon as possible. - The model works well on most of the emotions except: 'desire', 'disgust', 'embarrassment', 'excitement', 'fear', 'grief', 'nervousness', 'pride', 'relief', 'remorse', 'surprise'] - I'll try to fine-tune the model further and update here if RoBERTa achieves a better performance. - Each text datapoint can have more than 1 label. Most of the training set had 1 label: Counter({1: 36308, 2: 6541, 3: 532, 4: 28, 5: 1}). So currently I just used the first label for each of the datapoint. Not ideal but it does a decent job. ## Model Performance ============================================================<br> Emotion: admiration<br> ============================================================<br> GoEmotions Paper: 0.65<br> RoBERTa: 0.62<br> Support: 504<br> ============================================================<br> Emotion: amusement<br> ============================================================<br> GoEmotions Paper: 0.80<br> RoBERTa: 0.78<br> Support: 252<br> ============================================================<br> Emotion: anger<br> ============================================================<br> GoEmotions Paper: 0.47<br> RoBERTa: 0.44<br> Support: 197<br> ============================================================<br> Emotion: annoyance<br> ============================================================<br> GoEmotions Paper: 0.34<br> RoBERTa: 0.22<br> Support: 286<br> ============================================================<br> Emotion: approval<br> ============================================================<br> GoEmotions Paper: 0.36<br> RoBERTa: 0.31<br> Support: 318<br> ============================================================<br> Emotion: caring<br> ============================================================<br> GoEmotions Paper: 0.39<br> RoBERTa: 0.24<br> Support: 114<br> ============================================================<br> Emotion: confusion<br> ============================================================<br> GoEmotions Paper: 0.37<br> RoBERTa: 0.29<br> Support: 139<br> ============================================================<br> Emotion: curiosity<br> ============================================================<br> GoEmotions Paper: 0.54<br> RoBERTa: 0.48<br> Support: 233<br> ============================================================<br> Emotion: disappointment<br> ============================================================<br> GoEmotions Paper: 0.28<br> RoBERTa: 0.18<br> Support: 127<br> ============================================================<br> Emotion: disapproval<br> ============================================================<br> GoEmotions Paper: 0.39<br> RoBERTa: 0.26<br> Support: 220<br> ============================================================<br> Emotion: gratitude<br> ============================================================<br> GoEmotions Paper: 0.86<br> RoBERTa: 0.84<br> Support: 288<br> 
============================================================<br> Emotion: joy<br> ============================================================<br> GoEmotions Paper: 0.51<br> RoBERTa: 0.47<br> Support: 116<br> ============================================================<br> Emotion: love<br> ============================================================<br> GoEmotions Paper: 0.78<br> RoBERTa: 0.68<br> Support: 169<br> ============================================================<br> Emotion: neutral<br> ============================================================<br> GoEmotions Paper: 0.68<br> RoBERTa: 0.61<br> Support: 1606<br> ============================================================<br> Emotion: optimism<br> ============================================================<br> GoEmotions Paper: 0.51<br> RoBERTa: 0.52<br> Support: 120<br> ============================================================<br> Emotion: realization<br> ============================================================<br> GoEmotions Paper: 0.21<br> RoBERTa: 0.15<br> Support: 109<br> ============================================================<br> Emotion: sadness<br> ============================================================<br> GoEmotions Paper: 0.49<br> RoBERTa: 0.42<br> Support: 108
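A short inference sketch to go with the scores above, assuming the checkpoint loads with the standard text-classification pipeline; by default it returns only the single highest-scoring of the 28 emotion labels.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="bsingh/roberta_goEmotion")

# Returns the top-scoring emotion label, e.g. [{'label': ..., 'score': ...}].
print(classifier("I am not feeling well today."))
```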
{"language": "en", "license": "mit", "tags": ["text-classification", "pytorch", "roberta", "emotions"], "datasets": ["go_emotions"], "widget": [{"text": "I am not feeling well today."}]}
bsingh/roberta_goEmotion
null
[ "transformers", "pytorch", "roberta", "text-classification", "emotions", "en", "dataset:go_emotions", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #text-classification #emotions #en #dataset-go_emotions #license-mit #autotrain_compatible #endpoints_compatible #region-us
## This model is trained for GoEmotions dataset which contains labeled 58k Reddit comments with 28 emotions - admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral ## Training details: - The training script is provided here: URL - Please feel free to start an issue in the repo if you have trouble running the model and I would try to respond as soon as possible. - The model works well on most of the emotions except: 'desire', 'disgust', 'embarrassment', 'excitement', 'fear', 'grief', 'nervousness', 'pride', 'relief', 'remorse', 'surprise'] - I'll try to fine-tune the model further and update here if RoBERTa achieves a better performance. - Each text datapoint can have more than 1 label. Most of the training set had 1 label: Counter({1: 36308, 2: 6541, 3: 532, 4: 28, 5: 1}). So currently I just used the first label for each of the datapoint. Not ideal but it does a decent job. ## Model Performance ============================================================<br> Emotion: admiration<br> ============================================================<br> GoEmotions Paper: 0.65<br> RoBERTa: 0.62<br> Support: 504<br> ============================================================<br> Emotion: amusement<br> ============================================================<br> GoEmotions Paper: 0.80<br> RoBERTa: 0.78<br> Support: 252<br> ============================================================<br> Emotion: anger<br> ============================================================<br> GoEmotions Paper: 0.47<br> RoBERTa: 0.44<br> Support: 197<br> ============================================================<br> Emotion: annoyance<br> ============================================================<br> GoEmotions Paper: 0.34<br> RoBERTa: 0.22<br> Support: 286<br> ============================================================<br> Emotion: approval<br> ============================================================<br> GoEmotions Paper: 0.36<br> RoBERTa: 0.31<br> Support: 318<br> ============================================================<br> Emotion: caring<br> ============================================================<br> GoEmotions Paper: 0.39<br> RoBERTa: 0.24<br> Support: 114<br> ============================================================<br> Emotion: confusion<br> ============================================================<br> GoEmotions Paper: 0.37<br> RoBERTa: 0.29<br> Support: 139<br> ============================================================<br> Emotion: curiosity<br> ============================================================<br> GoEmotions Paper: 0.54<br> RoBERTa: 0.48<br> Support: 233<br> ============================================================<br> Emotion: disappointment<br> ============================================================<br> GoEmotions Paper: 0.28<br> RoBERTa: 0.18<br> Support: 127<br> ============================================================<br> Emotion: disapproval<br> ============================================================<br> GoEmotions Paper: 0.39<br> RoBERTa: 0.26<br> Support: 220<br> ============================================================<br> Emotion: gratitude<br> ============================================================<br> GoEmotions Paper: 0.86<br> RoBERTa: 0.84<br> Support: 288<br> 
============================================================<br> Emotion: joy<br> ============================================================<br> GoEmotions Paper: 0.51<br> RoBERTa: 0.47<br> Support: 116<br> ============================================================<br> Emotion: love<br> ============================================================<br> GoEmotions Paper: 0.78<br> RoBERTa: 0.68<br> Support: 169<br> ============================================================<br> Emotion: neutral<br> ============================================================<br> GoEmotions Paper: 0.68<br> RoBERTa: 0.61<br> Support: 1606<br> ============================================================<br> Emotion: optimism<br> ============================================================<br> GoEmotions Paper: 0.51<br> RoBERTa: 0.52<br> Support: 120<br> ============================================================<br> Emotion: realization<br> ============================================================<br> GoEmotions Paper: 0.21<br> RoBERTa: 0.15<br> Support: 109<br> ============================================================<br> Emotion: sadness<br> ============================================================<br> GoEmotions Paper: 0.49<br> RoBERTa: 0.42<br> Support: 108
[ "## This model is trained for GoEmotions dataset which contains labeled 58k Reddit comments with 28 emotions\n- admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral", "## Training details:\n- The training script is provided here: URL\n- Please feel free to start an issue in the repo if you have trouble running the model and I would try to respond as soon as possible.\n- The model works well on most of the emotions except: 'desire', 'disgust', 'embarrassment', 'excitement', 'fear', 'grief', 'nervousness', 'pride', 'relief', 'remorse', 'surprise']\n- I'll try to fine-tune the model further and update here if RoBERTa achieves a better performance.\n- Each text datapoint can have more than 1 label. Most of the training set had 1 label: Counter({1: 36308, 2: 6541, 3: 532, 4: 28, 5: 1}). So currently I just used the first label for each of the datapoint. Not ideal but it does a decent job.", "## Model Performance\n============================================================<br>\nEmotion: admiration<br>\n============================================================<br>\nGoEmotions Paper: 0.65<br>\nRoBERTa: 0.62<br>\nSupport: 504<br>\n============================================================<br>\nEmotion: amusement<br>\n============================================================<br>\nGoEmotions Paper: 0.80<br>\nRoBERTa: 0.78<br>\nSupport: 252<br>\n============================================================<br>\nEmotion: anger<br>\n============================================================<br>\nGoEmotions Paper: 0.47<br>\nRoBERTa: 0.44<br>\nSupport: 197<br>\n============================================================<br>\nEmotion: annoyance<br>\n============================================================<br>\nGoEmotions Paper: 0.34<br>\nRoBERTa: 0.22<br>\nSupport: 286<br>\n============================================================<br>\nEmotion: approval<br>\n============================================================<br>\nGoEmotions Paper: 0.36<br>\nRoBERTa: 0.31<br>\nSupport: 318<br>\n============================================================<br>\nEmotion: caring<br>\n============================================================<br>\nGoEmotions Paper: 0.39<br>\nRoBERTa: 0.24<br>\nSupport: 114<br>\n============================================================<br>\nEmotion: confusion<br>\n============================================================<br>\nGoEmotions Paper: 0.37<br>\nRoBERTa: 0.29<br>\nSupport: 139<br>\n============================================================<br>\nEmotion: curiosity<br>\n============================================================<br>\nGoEmotions Paper: 0.54<br>\nRoBERTa: 0.48<br>\nSupport: 233<br>\n============================================================<br>\nEmotion: disappointment<br>\n============================================================<br>\nGoEmotions Paper: 0.28<br>\nRoBERTa: 0.18<br>\nSupport: 127<br>\n============================================================<br>\nEmotion: disapproval<br>\n============================================================<br>\nGoEmotions Paper: 0.39<br>\nRoBERTa: 0.26<br>\nSupport: 220<br>\n============================================================<br>\nEmotion: gratitude<br>\n============================================================<br>\nGoEmotions Paper: 0.86<br>\nRoBERTa: 
0.84<br>\nSupport: 288<br>\n============================================================<br>\nEmotion: joy<br>\n============================================================<br>\nGoEmotions Paper: 0.51<br>\nRoBERTa: 0.47<br>\nSupport: 116<br>\n============================================================<br>\nEmotion: love<br>\n============================================================<br>\nGoEmotions Paper: 0.78<br>\nRoBERTa: 0.68<br>\nSupport: 169<br>\n============================================================<br>\nEmotion: neutral<br>\n============================================================<br>\nGoEmotions Paper: 0.68<br>\nRoBERTa: 0.61<br>\nSupport: 1606<br>\n============================================================<br>\nEmotion: optimism<br>\n============================================================<br>\nGoEmotions Paper: 0.51<br>\nRoBERTa: 0.52<br>\nSupport: 120<br>\n============================================================<br>\nEmotion: realization<br>\n============================================================<br>\nGoEmotions Paper: 0.21<br>\nRoBERTa: 0.15<br>\nSupport: 109<br>\n============================================================<br>\nEmotion: sadness<br>\n============================================================<br>\nGoEmotions Paper: 0.49<br>\nRoBERTa: 0.42<br>\nSupport: 108" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #emotions #en #dataset-go_emotions #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "## This model is trained for GoEmotions dataset which contains labeled 58k Reddit comments with 28 emotions\n- admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral", "## Training details:\n- The training script is provided here: URL\n- Please feel free to start an issue in the repo if you have trouble running the model and I would try to respond as soon as possible.\n- The model works well on most of the emotions except: 'desire', 'disgust', 'embarrassment', 'excitement', 'fear', 'grief', 'nervousness', 'pride', 'relief', 'remorse', 'surprise']\n- I'll try to fine-tune the model further and update here if RoBERTa achieves a better performance.\n- Each text datapoint can have more than 1 label. Most of the training set had 1 label: Counter({1: 36308, 2: 6541, 3: 532, 4: 28, 5: 1}). So currently I just used the first label for each of the datapoint. Not ideal but it does a decent job.", "## Model Performance\n============================================================<br>\nEmotion: admiration<br>\n============================================================<br>\nGoEmotions Paper: 0.65<br>\nRoBERTa: 0.62<br>\nSupport: 504<br>\n============================================================<br>\nEmotion: amusement<br>\n============================================================<br>\nGoEmotions Paper: 0.80<br>\nRoBERTa: 0.78<br>\nSupport: 252<br>\n============================================================<br>\nEmotion: anger<br>\n============================================================<br>\nGoEmotions Paper: 0.47<br>\nRoBERTa: 0.44<br>\nSupport: 197<br>\n============================================================<br>\nEmotion: annoyance<br>\n============================================================<br>\nGoEmotions Paper: 0.34<br>\nRoBERTa: 0.22<br>\nSupport: 286<br>\n============================================================<br>\nEmotion: approval<br>\n============================================================<br>\nGoEmotions Paper: 0.36<br>\nRoBERTa: 0.31<br>\nSupport: 318<br>\n============================================================<br>\nEmotion: caring<br>\n============================================================<br>\nGoEmotions Paper: 0.39<br>\nRoBERTa: 0.24<br>\nSupport: 114<br>\n============================================================<br>\nEmotion: confusion<br>\n============================================================<br>\nGoEmotions Paper: 0.37<br>\nRoBERTa: 0.29<br>\nSupport: 139<br>\n============================================================<br>\nEmotion: curiosity<br>\n============================================================<br>\nGoEmotions Paper: 0.54<br>\nRoBERTa: 0.48<br>\nSupport: 233<br>\n============================================================<br>\nEmotion: disappointment<br>\n============================================================<br>\nGoEmotions Paper: 0.28<br>\nRoBERTa: 0.18<br>\nSupport: 127<br>\n============================================================<br>\nEmotion: disapproval<br>\n============================================================<br>\nGoEmotions Paper: 0.39<br>\nRoBERTa: 0.26<br>\nSupport: 
220<br>\n============================================================<br>\nEmotion: gratitude<br>\n============================================================<br>\nGoEmotions Paper: 0.86<br>\nRoBERTa: 0.84<br>\nSupport: 288<br>\n============================================================<br>\nEmotion: joy<br>\n============================================================<br>\nGoEmotions Paper: 0.51<br>\nRoBERTa: 0.47<br>\nSupport: 116<br>\n============================================================<br>\nEmotion: love<br>\n============================================================<br>\nGoEmotions Paper: 0.78<br>\nRoBERTa: 0.68<br>\nSupport: 169<br>\n============================================================<br>\nEmotion: neutral<br>\n============================================================<br>\nGoEmotions Paper: 0.68<br>\nRoBERTa: 0.61<br>\nSupport: 1606<br>\n============================================================<br>\nEmotion: optimism<br>\n============================================================<br>\nGoEmotions Paper: 0.51<br>\nRoBERTa: 0.52<br>\nSupport: 120<br>\n============================================================<br>\nEmotion: realization<br>\n============================================================<br>\nGoEmotions Paper: 0.21<br>\nRoBERTa: 0.15<br>\nSupport: 109<br>\n============================================================<br>\nEmotion: sadness<br>\n============================================================<br>\nGoEmotions Paper: 0.49<br>\nRoBERTa: 0.42<br>\nSupport: 108" ]
text-generation
transformers
# Yoda DialoGPT Model
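The card above carries no usage notes; since the tags mark this as a conversational GPT-2 checkpoint, here is a minimal single-turn chat sketch, assuming the standard DialoGPT interaction pattern (the prompt text is just an illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch, assuming the usual DialoGPT chat format (user turn + EOS token).
tokenizer = AutoTokenizer.from_pretrained("bspans/DialoGPT-small-yoda")
model = AutoModelForCausalLM.from_pretrained("bspans/DialoGPT-small-yoda")

# Encode one user turn, then let the model generate a reply.
user_turn = "How do I become a Jedi?"
input_ids = tokenizer.encode(user_turn + tokenizer.eos_token, return_tensors="pt")

with torch.no_grad():
    reply_ids = model.generate(
        input_ids,
        max_length=100,
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens (the reply that follows the user turn).
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```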
{"tags": ["conversational"]}
bspans/DialoGPT-small-yoda
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Yoda DialoGPT Model
[ "# Yoda DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Yoda DialoGPT Model" ]
fill-mask
transformers
# hseBERT **hseBert-it-cased** is a BERT model obtained by MLM adaptive-tuning [**bert-base-italian-xxl-cased**](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on texts of Italian regulation (Testo unico sulla sicurezza sul lavoro - D.lgs. 9 aprile 2008, n. 81, Codice dell'Ambiente - D.lgs. 3 aprile 2006, n. 152), approximately 7k sentences. # Usage ```python from transformers import AutoModel, AutoTokenizer model_name = "bullmount/hseBert-it-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ```
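The usage snippet in the card only loads the checkpoint; a minimal masked-token prediction sketch, assuming the standard fill-mask pipeline and reusing one of the widget prompts from the metadata below, could look like this:

```python
from transformers import pipeline

# Sketch only: score candidate tokens for the [MASK] position with this checkpoint.
fill_mask = pipeline("fill-mask", model="bullmount/hseBert-it-cased")

# Prompt taken from the widget examples in the model metadata.
for prediction in fill_mask("La legge fornisce l'esatta [MASK] di Green pass base."):
    print(prediction["token_str"], round(prediction["score"], 3))
```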
{"language": "it", "license": "mit", "widget": [{"text": "\u00c8 stata pubblicata la [MASK] di conversione del D.L. 24 dicembre 2021 n. 221 ."}, {"text": "La legge fornisce l\u2019esatta [MASK] di Green pass base."}, {"text": "Il datore di lavoro organizza e predispone i posti di lavoro di cui all'articolo 173, in [MASK] ai requisiti minimi di cui all'allegato XXXIV."}, {"text": "Le principali novit\u00e0 riguardano la quarantena precauzionale e il [MASK] di autosorveglianza."}]}
bullmount/hseBert-it-cased
null
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "it", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #tensorboard #bert #fill-mask #it #license-mit #autotrain_compatible #endpoints_compatible #region-us
# hseBERT hseBert-it-cased is a BERT model obtained by MLM adaptive-tuning bert-base-italian-xxl-cased on texts of Italian regulation (Testo unico sulla sicurezza sul lavoro - D.lgs. 9 aprile 2008, n. 81, Codice dell'Ambiente - D.lgs. 3 aprile 2006, n. 152), approximately 7k sentences. # Usage
[ "# hseBERT\n\nhseBert-it-cased is a BERT model obtained by MLM adaptive-tuning bert-base-italian-xxl-cased on texts of Italian regulation (Testo unico sulla sicurezza sul lavoro - D.lgs. 9 aprile 2008, n. 81, Codice dell'Ambiente - D.lgs. 3 aprile 2006, n. 152), approximately 7k sentences.", "# Usage" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #it #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# hseBERT\n\nhseBert-it-cased is a BERT model obtained by MLM adaptive-tuning bert-base-italian-xxl-cased on texts of Italian regulation (Testo unico sulla sicurezza sul lavoro - D.lgs. 9 aprile 2008, n. 81, Codice dell'Ambiente - D.lgs. 3 aprile 2006, n. 152), approximately 7k sentences.", "# Usage" ]
token-classification
transformers
tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.9097618003799502 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1417 - F1: 0.9098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2754 | 1.0 | 834 | 0.1683 | 0.8717 | | 0.1366 | 2.0 | 1668 | 0.1449 | 0.8921 | | 0.0863 | 3.0 | 2502 | 0.1417 | 0.9098 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
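The auto-generated card above reports metrics but no usage snippet; a minimal inference sketch, assuming the standard token-classification pipeline and one of the widget sentences from the metadata below, could look like this:

```python
from transformers import pipeline

# Sketch only: run Italian NER with the fine-tuned PAN-X checkpoint.
ner = pipeline(
    "token-classification",
    model="bullmount/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# Sentence taken from the widget examples in the model metadata.
for entity in ner("Antonio ha chiesto ad Alessia di recarsi alla sede INAIL."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```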
{"license": "mit", "widget": [{"text": "Luigi \u00e8 nato a Roma."}, {"text": "Antonio ha chiesto ad Alessia di recarsi alla sede INAIL."}]}
bullmount/xlm-roberta-base-finetuned-panx-it
null
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #xlm-roberta #token-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us
tags: * generated\_from\_trainer datasets: * xtreme metrics: * f1 model-index: * name: xlm-roberta-base-finetuned-panx-it results: + task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: URL metrics: - name: F1 type: f1 value: 0.9097618003799502 --- xlm-roberta-base-finetuned-panx-it ================================== This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset. It achieves the following results on the evaluation set: * Loss: 0.1417 * F1: 0.9098 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #token-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
null
null
mmmm
{}
bumhead/SnarlyTrain
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #region-us
mmmm
[]
[ "TAGS\n#region-us \n" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0586 - Precision: 0.9390 - Recall: 0.9554 - F1: 0.9471 - Accuracy: 0.9873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0877 | 1.0 | 1756 | 0.0662 | 0.9081 | 0.9344 | 0.9210 | 0.9827 | | 0.0376 | 2.0 | 3512 | 0.0599 | 0.9362 | 0.9502 | 0.9431 | 0.9862 | | 0.0209 | 3.0 | 5268 | 0.0586 | 0.9390 | 0.9554 | 0.9471 | 0.9873 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
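Because the "Intended uses" sections above are still placeholders, here is a minimal inference sketch, assuming the checkpoint ships its tokenizer and label map (the example sentence is an illustration only):

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Sketch only: tag a sentence with the CoNLL-2003 fine-tuned checkpoint above.
model_name = "butchland/bert-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map every word piece to its highest-scoring label.
label_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, label_ids):
    print(token, model.config.id2label[label_id])
```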
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9389679126695336, "name": "Precision"}, {"type": "recall", "value": 0.9554022214742511, "name": "Recall"}, {"type": "f1", "value": 0.9471137804471137, "name": "F1"}, {"type": "accuracy", "value": 0.9873138282215812, "name": "Accuracy"}]}]}]}
butchland/bert-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-finetuned-ner ================== This model is a fine-tuned version of bert-base-cased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0586 * Precision: 0.9390 * Recall: 0.9554 * F1: 0.9471 * Accuracy: 0.9873 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.14.1 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
# CORe Model - Clinical Diagnosis Prediction ## Model description The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf). It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective. This model checkpoint is **fine-tuned on the task of diagnosis prediction**. The model expects patient admission notes as input and outputs multi-label ICD9-code predictions. #### Model Predictions The model makes predictions on a total of 9237 labels. These contain 3- and 4-digit ICD9 codes and textual descriptions of these codes. The 4-digit codes and textual descriptions help to incorporate further topical and hierarchical information into the model during training (see Section 4.2 _ICD+: Incorporation of ICD Hierarchy_ in our paper). We recommend to only use the **3-digit code predictions at inference time**, because only those have been evaluated in our work. #### How to use CORe Diagnosis Prediction You can load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-diagnosis-prediction") model = AutoModelForSequenceClassification.from_pretrained("bvanaken/CORe-clinical-diagnosis-prediction") ``` The following code shows an inference example: ``` input = "CHIEF COMPLAINT: Headaches\n\nPRESENT ILLNESS: 58yo man w/ hx of hypertension, AFib on coumadin presented to ED with the worst headache of his life." tokenized_input = tokenizer(input, return_tensors="pt") output = model(**tokenized_input) import torch predictions = torch.sigmoid(output.logits) predicted_labels = [model.config.id2label[_id] for _id in (predictions > 0.3).nonzero()[:, 1].tolist()] ``` Note: For the best performance, we recommend to determine the thresholds (0.3 in this example) individually per label. ### More Information For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/). ### Cite ```bibtex @inproceedings{vanaken21, author = {Betty van Aken and Jens-Michalis Papaioannou and Manuel Mayrdorfer and Klemens Budde and Felix A. Gers and Alexander Löser}, title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration}, booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, {EACL} 2021, Online, April 19 - 23, 2021}, publisher = {Association for Computational Linguistics}, year = {2021}, } ```
{"language": "en", "tags": ["bert", "medical", "clinical", "diagnosis", "text-classification"], "thumbnail": "https://core.app.datexis.com/static/paper.png", "widget": [{"text": "Patient with hypertension presents to ICU."}]}
DATEXIS/CORe-clinical-diagnosis-prediction
null
[ "transformers", "pytorch", "bert", "text-classification", "medical", "clinical", "diagnosis", "en", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #medical #clinical #diagnosis #en #autotrain_compatible #endpoints_compatible #has_space #region-us
# CORe Model - Clinical Diagnosis Prediction ## Model description The CORe (_Clinical Outcome Representations_) model is introduced in the paper Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration. It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective. This model checkpoint is fine-tuned on the task of diagnosis prediction. The model expects patient admission notes as input and outputs multi-label ICD9-code predictions. #### Model Predictions The model makes predictions on a total of 9237 labels. These contain 3- and 4-digit ICD9 codes and textual descriptions of these codes. The 4-digit codes and textual descriptions help to incorporate further topical and hierarchical information into the model during training (see Section 4.2 _ICD+: Incorporation of ICD Hierarchy_ in our paper). We recommend to only use the 3-digit code predictions at inference time, because only those have been evaluated in our work. #### How to use CORe Diagnosis Prediction You can load the model via the transformers library: The following code shows an inference example: Note: For the best performance, we recommend to determine the thresholds (0.3 in this example) individually per label. ### More Information For all the details about CORe and contact info, please visit URL. ### Cite
[ "# CORe Model - Clinical Diagnosis Prediction", "## Model description\n\nThe CORe (_Clinical Outcome Representations_) model is introduced in the paper Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration.\nIt is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.\n\nThis model checkpoint is fine-tuned on the task of diagnosis prediction.\nThe model expects patient admission notes as input and outputs multi-label ICD9-code predictions.", "#### Model Predictions\nThe model makes predictions on a total of 9237 labels. These contain 3- and 4-digit ICD9 codes and textual descriptions of these codes. The 4-digit codes and textual descriptions help to incorporate further topical and hierarchical information into the model during training (see Section 4.2 _ICD+: Incorporation of ICD Hierarchy_ in our paper). We recommend to only use the 3-digit code predictions at inference time, because only those have been evaluated in our work.", "#### How to use CORe Diagnosis Prediction\n\nYou can load the model via the transformers library:\n\n\nThe following code shows an inference example:\n\n\nNote: For the best performance, we recommend to determine the thresholds (0.3 in this example) individually per label.", "### More Information\n\nFor all the details about CORe and contact info, please visit URL.", "### Cite" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #medical #clinical #diagnosis #en #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# CORe Model - Clinical Diagnosis Prediction", "## Model description\n\nThe CORe (_Clinical Outcome Representations_) model is introduced in the paper Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration.\nIt is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.\n\nThis model checkpoint is fine-tuned on the task of diagnosis prediction.\nThe model expects patient admission notes as input and outputs multi-label ICD9-code predictions.", "#### Model Predictions\nThe model makes predictions on a total of 9237 labels. These contain 3- and 4-digit ICD9 codes and textual descriptions of these codes. The 4-digit codes and textual descriptions help to incorporate further topical and hierarchical information into the model during training (see Section 4.2 _ICD+: Incorporation of ICD Hierarchy_ in our paper). We recommend to only use the 3-digit code predictions at inference time, because only those have been evaluated in our work.", "#### How to use CORe Diagnosis Prediction\n\nYou can load the model via the transformers library:\n\n\nThe following code shows an inference example:\n\n\nNote: For the best performance, we recommend to determine the thresholds (0.3 in this example) individually per label.", "### More Information\n\nFor all the details about CORe and contact info, please visit URL.", "### Cite" ]
text-classification
transformers
# CORe Model - Clinical Mortality Risk Prediction ## Model description The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf). It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective. This model checkpoint is **fine-tuned on the task of mortality risk prediction**. The model expects patient admission notes as input and outputs the predicted risk of in-hospital mortality. #### How to use CORe Mortality Risk Prediction You can load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-mortality-prediction") model = AutoModelForSequenceClassification.from_pretrained("bvanaken/CORe-clinical-mortality-prediction") ``` The following code shows an inference example: ``` input = "CHIEF COMPLAINT: Headaches\n\nPRESENT ILLNESS: 58yo man w/ hx of hypertension, AFib on coumadin presented to ED with the worst headache of his life." tokenized_input = tokenizer(input, return_tensors="pt") output = model(**tokenized_input) import torch predictions = torch.softmax(output.logits.detach(), dim=1) mortality_risk_prediction = predictions[0][1].item() ``` ### More Information For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/). ### Cite ```bibtex @inproceedings{vanaken21, author = {Betty van Aken and Jens-Michalis Papaioannou and Manuel Mayrdorfer and Klemens Budde and Felix A. Gers and Alexander Löser}, title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration}, booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, {EACL} 2021, Online, April 19 - 23, 2021}, publisher = {Association for Computational Linguistics}, year = {2021}, } ```
{"language": "en", "tags": ["bert", "medical", "clinical", "mortality"], "thumbnail": "https://core.app.datexis.com/static/paper.png"}
DATEXIS/CORe-clinical-mortality-prediction
null
[ "transformers", "pytorch", "bert", "text-classification", "medical", "clinical", "mortality", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #medical #clinical #mortality #en #autotrain_compatible #endpoints_compatible #region-us
# CORe Model - Clinical Mortality Risk Prediction ## Model description The CORe (_Clinical Outcome Representations_) model is introduced in the paper Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration. It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective. This model checkpoint is fine-tuned on the task of mortality risk prediction. The model expects patient admission notes as input and outputs the predicted risk of in-hospital mortality. #### How to use CORe Mortality Risk Prediction You can load the model via the transformers library: The following code shows an inference example: ### More Information For all the details about CORe and contact info, please visit URL. ### Cite
[ "# CORe Model - Clinical Mortality Risk Prediction", "## Model description\n\nThe CORe (_Clinical Outcome Representations_) model is introduced in the paper Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration.\nIt is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.\n\nThis model checkpoint is fine-tuned on the task of mortality risk prediction.\nThe model expects patient admission notes as input and outputs the predicted risk of in-hospital mortality.", "#### How to use CORe Mortality Risk Prediction\n\nYou can load the model via the transformers library:\n\n\nThe following code shows an inference example:", "### More Information\n\nFor all the details about CORe and contact info, please visit URL.", "### Cite" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #medical #clinical #mortality #en #autotrain_compatible #endpoints_compatible #region-us \n", "# CORe Model - Clinical Mortality Risk Prediction", "## Model description\n\nThe CORe (_Clinical Outcome Representations_) model is introduced in the paper Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration.\nIt is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.\n\nThis model checkpoint is fine-tuned on the task of mortality risk prediction.\nThe model expects patient admission notes as input and outputs the predicted risk of in-hospital mortality.", "#### How to use CORe Mortality Risk Prediction\n\nYou can load the model via the transformers library:\n\n\nThe following code shows an inference example:", "### More Information\n\nFor all the details about CORe and contact info, please visit URL.", "### Cite" ]
null
transformers
# CORe Model - BioBERT + Clinical Outcome Pre-Training ## Model description The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf). It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective. #### How to use CORe You can load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-outcome-biobert-v1") model = AutoModel.from_pretrained("bvanaken/CORe-clinical-outcome-biobert-v1") ``` From there, you can fine-tune it on clinical tasks that benefit from patient outcome knowledge. ### Pre-Training Data The model is based on [BioBERT](https://huggingface.co/dmis-lab/biobert-v1.1) pre-trained on PubMed data. The _Clinical Outcome Pre-Training_ included discharge summaries from the MIMIC III training set (specified [here](https://github.com/bvanaken/clinical-outcome-prediction/blob/master/tasks/mimic_train.csv)), medical transcriptions from [MTSamples](https://mtsamples.com/) and clinical notes from the i2b2 challenges 2006-2012. It further includes ~10k case reports from PubMed Central (PMC), disease articles from Wikipedia and article sections from the [MedQuAd](https://github.com/abachaa/MedQuAD) dataset extracted from NIH websites. ### More Information For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/). ### Cite ```bibtex @inproceedings{vanaken21, author = {Betty van Aken and Jens-Michalis Papaioannou and Manuel Mayrdorfer and Klemens Budde and Felix A. Gers and Alexander Löser}, title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration}, booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, {EACL} 2021, Online, April 19 - 23, 2021}, publisher = {Association for Computational Linguistics}, year = {2021}, } ```
{"language": "en", "tags": ["bert", "medical", "clinical"], "thumbnail": "https://core.app.datexis.com/static/paper.png"}
bvanaken/CORe-clinical-outcome-biobert-v1
null
[ "transformers", "pytorch", "jax", "bert", "medical", "clinical", "en", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #bert #medical #clinical #en #endpoints_compatible #region-us
# CORe Model - BioBERT + Clinical Outcome Pre-Training ## Model description The CORe (_Clinical Outcome Representations_) model is introduced in the paper Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration. It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective. #### How to use CORe You can load the model via the transformers library: From there, you can fine-tune it on clinical tasks that benefit from patient outcome knowledge. ### Pre-Training Data The model is based on BioBERT pre-trained on PubMed data. The _Clinical Outcome Pre-Training_ included discharge summaries from the MIMIC III training set (specified here), medical transcriptions from MTSamples and clinical notes from the i2b2 challenges 2006-2012. It further includes ~10k case reports from PubMed Central (PMC), disease articles from Wikipedia and article sections from the MedQuAd dataset extracted from NIH websites. ### More Information For all the details about CORe and contact info, please visit URL. ### Cite
[ "# CORe Model - BioBERT + Clinical Outcome Pre-Training", "## Model description\n\nThe CORe (_Clinical Outcome Representations_) model is introduced in the paper Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration.\nIt is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.", "#### How to use CORe\n\nYou can load the model via the transformers library:\n\nFrom there, you can fine-tune it on clinical tasks that benefit from patient outcome knowledge.", "### Pre-Training Data\n\nThe model is based on BioBERT pre-trained on PubMed data.\nThe _Clinical Outcome Pre-Training_ included discharge summaries from the MIMIC III training set (specified here), medical transcriptions from MTSamples and clinical notes from the i2b2 challenges 2006-2012. It further includes ~10k case reports from PubMed Central (PMC), disease articles from Wikipedia and article sections from the MedQuAd dataset extracted from NIH websites.", "### More Information\n\nFor all the details about CORe and contact info, please visit URL.", "### Cite" ]
[ "TAGS\n#transformers #pytorch #jax #bert #medical #clinical #en #endpoints_compatible #region-us \n", "# CORe Model - BioBERT + Clinical Outcome Pre-Training", "## Model description\n\nThe CORe (_Clinical Outcome Representations_) model is introduced in the paper Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration.\nIt is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.", "#### How to use CORe\n\nYou can load the model via the transformers library:\n\nFrom there, you can fine-tune it on clinical tasks that benefit from patient outcome knowledge.", "### Pre-Training Data\n\nThe model is based on BioBERT pre-trained on PubMed data.\nThe _Clinical Outcome Pre-Training_ included discharge summaries from the MIMIC III training set (specified here), medical transcriptions from MTSamples and clinical notes from the i2b2 challenges 2006-2012. It further includes ~10k case reports from PubMed Central (PMC), disease articles from Wikipedia and article sections from the MedQuAd dataset extracted from NIH websites.", "### More Information\n\nFor all the details about CORe and contact info, please visit URL.", "### Cite" ]
text-classification
transformers
# Clinical Assertion / Negation Classification BERT ## Model description The Clinical Assertion and Negation Classification BERT is introduced in the paper [Assertion Detection in Clinical Notes: Medical Language Models to the Rescue? ](https://aclanthology.org/2021.nlpmc-1.5/). The model helps structure information in clinical patient letters by classifying medical conditions mentioned in the letter into PRESENT, ABSENT and POSSIBLE. The model is based on the [ClinicalBERT - Bio + Discharge Summary BERT Model](https://huggingface.co/emilyalsentzer/Bio_Discharge_Summary_BERT) by Alsentzer et al. and fine-tuned on assertion data from the [2010 i2b2 challenge](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3168320/). #### How to use the model You can load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline tokenizer = AutoTokenizer.from_pretrained("bvanaken/clinical-assertion-negation-bert") model = AutoModelForSequenceClassification.from_pretrained("bvanaken/clinical-assertion-negation-bert") ``` The model expects input in the form of spans/sentences with one marked entity to classify as `PRESENT(0)`, `ABSENT(1)` or `POSSIBLE(2)`. The entity in question is identified with the special token `[entity]` surrounding it. Example input and inference: ``` input = "The patient recovered during the night and now denies any [entity] shortness of breath [entity]." classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer) classification = classifier(input) # [{'label': 'ABSENT', 'score': 0.9842607378959656}] ``` ### Cite When working with the model, please cite our paper as follows: ```bibtex @inproceedings{van-aken-2021-assertion, title = "Assertion Detection in Clinical Notes: Medical Language Models to the Rescue?", author = "van Aken, Betty and Trajanovska, Ivana and Siu, Amy and Mayrdorfer, Manuel and Budde, Klemens and Loeser, Alexander", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.nlpmc-1.5", doi = "10.18653/v1/2021.nlpmc-1.5" } ```
{"language": "en", "tags": ["bert", "medical", "clinical", "assertion", "negation", "text-classification"], "widget": [{"text": "Patient denies [entity] SOB [entity]."}]}
bvanaken/clinical-assertion-negation-bert
null
[ "transformers", "pytorch", "bert", "text-classification", "medical", "clinical", "assertion", "negation", "en", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #medical #clinical #assertion #negation #en #autotrain_compatible #endpoints_compatible #has_space #region-us
# Clinical Assertion / Negation Classification BERT ## Model description The Clinical Assertion and Negation Classification BERT is introduced in the paper Assertion Detection in Clinical Notes: Medical Language Models to the Rescue? . The model helps structure information in clinical patient letters by classifying medical conditions mentioned in the letter into PRESENT, ABSENT and POSSIBLE. The model is based on the ClinicalBERT - Bio + Discharge Summary BERT Model by Alsentzer et al. and fine-tuned on assertion data from the 2010 i2b2 challenge. #### How to use the model You can load the model via the transformers library: The model expects input in the form of spans/sentences with one marked entity to classify as 'PRESENT(0)', 'ABSENT(1)' or 'POSSIBLE(2)'. The entity in question is identified with the special token '[entity]' surrounding it. Example input and inference: ### Cite When working with the model, please cite our paper as follows:
[ "# Clinical Assertion / Negation Classification BERT", "## Model description\n\nThe Clinical Assertion and Negation Classification BERT is introduced in the paper Assertion Detection in Clinical Notes: Medical Language Models to the Rescue?\n. The model helps structure information in clinical patient letters by classifying medical conditions mentioned in the letter into PRESENT, ABSENT and POSSIBLE.\n\nThe model is based on the ClinicalBERT - Bio + Discharge Summary BERT Model by Alsentzer et al. and fine-tuned on assertion data from the 2010 i2b2 challenge.", "#### How to use the model\n\nYou can load the model via the transformers library:\n\n\nThe model expects input in the form of spans/sentences with one marked entity to classify as 'PRESENT(0)', 'ABSENT(1)' or 'POSSIBLE(2)'. The entity in question is identified with the special token '[entity]' surrounding it.\n\nExample input and inference:", "### Cite\n\nWhen working with the model, please cite our paper as follows:" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #medical #clinical #assertion #negation #en #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Clinical Assertion / Negation Classification BERT", "## Model description\n\nThe Clinical Assertion and Negation Classification BERT is introduced in the paper Assertion Detection in Clinical Notes: Medical Language Models to the Rescue?\n. The model helps structure information in clinical patient letters by classifying medical conditions mentioned in the letter into PRESENT, ABSENT and POSSIBLE.\n\nThe model is based on the ClinicalBERT - Bio + Discharge Summary BERT Model by Alsentzer et al. and fine-tuned on assertion data from the 2010 i2b2 challenge.", "#### How to use the model\n\nYou can load the model via the transformers library:\n\n\nThe model expects input in the form of spans/sentences with one marked entity to classify as 'PRESENT(0)', 'ABSENT(1)' or 'POSSIBLE(2)'. The entity in question is identified with the special token '[entity]' surrounding it.\n\nExample input and inference:", "### Cite\n\nWhen working with the model, please cite our paper as follows:" ]
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best` ♻️ Imported from https://zenodo.org/record/3966501 This model was trained by Shinji Watanabe using librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
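The "Demo" section above is still marked as coming soon; the sketch below shows how such an ESPnet2 checkpoint is typically decoded via espnet_model_zoo. The Zenodo URL is taken from the card, but treating it as directly resolvable by ModelDownloader, the audio file name, and the CPU device are all assumptions:

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text
from espnet_model_zoo.downloader import ModelDownloader

# Sketch only: fetch the packed model and build an inference wrapper.
# Assumption: ModelDownloader can resolve the Zenodo record cited in the card.
downloader = ModelDownloader()
speech2text = Speech2Text(
    **downloader.download_and_unpack("https://zenodo.org/record/3966501"),
    device="cpu",
)

# "sample.wav" is a placeholder for any 16 kHz mono recording.
speech, rate = soundfile.read("sample.wav")
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```

The same pattern should apply to the other LibriSpeech ESPnet checkpoints listed further down in this dump.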
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
byan/librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_accum_grad3_optim_conflr0.001_sp
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1804.00015" ]
[ "en" ]
TAGS #espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
## Example ESPnet2 ASR model ### 'Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL' ️ Imported from URL This model was trained by Shinji Watanabe using librispeech recipe in espnet. ### Demo: How to use in ESPnet2 ### Citing ESPnet or arXiv:
[ "## Example ESPnet2 ASR model", "### 'Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL'\n\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using librispeech recipe in espnet.", "### Demo: How to use in ESPnet2", "### Citing ESPnet\n\n\n\nor arXiv:" ]
[ "TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n", "## Example ESPnet2 ASR model", "### 'Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL'\n\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using librispeech recipe in espnet.", "### Demo: How to use in ESPnet2", "### Citing ESPnet\n\n\n\nor arXiv:" ]
automatic-speech-recognition
espnet
## Example ESPnet2 ASR model ### `Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best` ♻️ Imported from https://zenodo.org/record/3966501 This model was trained by Shinji Watanabe using librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
byan/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp
null
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1804.00015" ]
[ "en" ]
TAGS #espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
## Example ESPnet2 ASR model ### 'Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL' ️ Imported from URL This model was trained by Shinji Watanabe using librispeech recipe in espnet. ### Demo: How to use in ESPnet2 ### Citing ESPnet or arXiv:
[ "## Example ESPnet2 ASR model", "### 'Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL'\n\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using librispeech recipe in espnet.", "### Demo: How to use in ESPnet2", "### Citing ESPnet\n\n\n\nor arXiv:" ]
[ "TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n", "## Example ESPnet2 ASR model", "### 'Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL'\n\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using librispeech recipe in espnet.", "### Demo: How to use in ESPnet2", "### Citing ESPnet\n\n\n\nor arXiv:" ]
text-generation
transformers
## Ko-DialoGPT ### How to use ```python from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel import torch device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PreTrainedTokenizerFast.from_pretrained('byeongal/Ko-DialoGPT') model = GPT2LMHeadModel.from_pretrained('byeongal/Ko-DialoGPT').to(device) past_user_inputs = [] generated_responses = [] while True: user_input = input(">> User:") if user_input == 'bye': break text_idx = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors='pt') for i in range(len(generated_responses)-1, len(generated_responses)-3, -1): if i < 0: break encoded_vector = tokenizer.encode(generated_responses[i] + tokenizer.eos_token, return_tensors='pt') if text_idx.shape[-1] + encoded_vector.shape[-1] < 1000: text_idx = torch.cat([encoded_vector, text_idx], dim=-1) else: break encoded_vector = tokenizer.encode(past_user_inputs[i] + tokenizer.eos_token, return_tensors='pt') if text_idx.shape[-1] + encoded_vector.shape[-1] < 1000: text_idx = torch.cat([encoded_vector, text_idx], dim=-1) else: break text_idx = text_idx.to(device) inference_output = model.generate( text_idx, max_length=1000, num_beams=5, top_k=20, no_repeat_ngram_size=4, length_penalty=0.65, repetition_penalty=2.0, ) inference_output = inference_output.tolist() bot_response = tokenizer.decode(inference_output[0][text_idx.shape[-1]:], skip_special_tokens=True) print(f"Bot: {bot_response}") past_user_inputs.append(user_input) generated_responses.append(bot_response) ``` ### Reference * [SKT-KoGPT2](https://huggingface.co/skt/kogpt2-base-v2) * [KETI R&D 데이터](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-008) * [한국어 대화 요약](https://aihub.or.kr/aidata/30714)
{"language": "ko", "license": "cc-by-nc-sa-4.0", "tags": ["gpt2", "conversational"]}
byeongal/Ko-DialoGPT
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "ko", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ko" ]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #ko #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
## Ko-DialoGPT ### How to use ### Reference * SKT-KoGPT2 * KETI R&D 데이터 * 한국어 대화 요약
[ "## Ko-DialoGPT", "### How to use", "### Reference\n* SKT-KoGPT2\n* KETI R&D 데이터\n* 한국어 대화 요약" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #ko #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Ko-DialoGPT", "### How to use", "### Reference\n* SKT-KoGPT2\n* KETI R&D 데이터\n* 한국어 대화 요약" ]
feature-extraction
transformers
# BART base model for Teachable NLP - This model was forked from [bart-base](https://huggingface.co/facebook/bart-base) for fine-tuning with [Teachable NLP](https://ainize.ai/teachable-nlp). The BART model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct 2019. According to the abstract, BART uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT). The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. The authors’ code can be found here: https://github.com/pytorch/fairseq/tree/master/examples/bart
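The card describes the architecture but shows no loading code; a minimal feature-extraction sketch, assuming this fork ships the original facebook/bart-base tokenizer files (the input sentence is just an illustration), could look like this:

```python
from transformers import BartModel, BartTokenizer

# Sketch only: pull the decoder's last hidden states from the forked checkpoint.
tokenizer = BartTokenizer.from_pretrained("byeongal/bart-base")
model = BartModel.from_pretrained("byeongal/bart-base")

inputs = tokenizer("Teachable NLP lets you fine-tune this model.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```

Swapping in byeongal/bart-large should work the same way for the sibling checkpoint below.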
{"language": "en", "license": "mit", "tags": ["bart"], "thumbnail": "https://huggingface.co/front/thumbnails/facebook.png"}
byeongal/bart-base
null
[ "transformers", "pytorch", "bart", "feature-extraction", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bart #feature-extraction #en #license-mit #endpoints_compatible #region-us
# BART base model for Teachable NLP - This model forked from bart-base for fine tune Teachable NLP. The Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract, Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT). The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. The Authors’ code can be found here: URL
[ "# BART base model for Teachable NLP\n\n- This model forked from bart-base for fine tune Teachable NLP.\n\nThe Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,\n\nBart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).\n\nThe pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.\n\nBART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.\n\nThe Authors’ code can be found here:\nURL" ]
[ "TAGS\n#transformers #pytorch #bart #feature-extraction #en #license-mit #endpoints_compatible #region-us \n", "# BART base model for Teachable NLP\n\n- This model forked from bart-base for fine tune Teachable NLP.\n\nThe Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,\n\nBart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).\n\nThe pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.\n\nBART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.\n\nThe Authors’ code can be found here:\nURL" ]
feature-extraction
transformers
# BART base model for Teachable NLP - This model forked from [bart-base](https://huggingface.co/facebook/bart-base) for fine tune [Teachable NLP](https://ainize.ai/teachable-nlp). The Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract, Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT). The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. The Authors’ code can be found here: https://github.com/pytorch/fairseq/tree/master/examples/bart
{"language": "en", "license": "mit", "tags": ["bart"], "thumbnail": "https://huggingface.co/front/thumbnails/facebook.png"}
byeongal/bart-large
null
[ "transformers", "pytorch", "bart", "feature-extraction", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bart #feature-extraction #en #license-mit #endpoints_compatible #region-us
# BART base model for Teachable NLP - This model forked from bart-base for fine tune Teachable NLP. The Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract, Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT). The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. The Authors’ code can be found here: URL
[ "# BART base model for Teachable NLP\n\n- This model forked from bart-base for fine tune Teachable NLP.\n\nThe Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,\n\nBart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).\n\nThe pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.\n\nBART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.\n\nThe Authors’ code can be found here:\nURL" ]
[ "TAGS\n#transformers #pytorch #bart #feature-extraction #en #license-mit #endpoints_compatible #region-us \n", "# BART base model for Teachable NLP\n\n- This model forked from bart-base for fine tune Teachable NLP.\n\nThe Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,\n\nBart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).\n\nThe pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.\n\nBART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.\n\nThe Authors’ code can be found here:\nURL" ]
fill-mask
transformers
# BERT base model (uncased) for Teachable NLP - This model forked from [bert-base-uncased](https://huggingface.co/bert-base-uncased) for fine tune [Teachable NLP](https://ainize.ai/teachable-nlp). Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'}, {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'}, {'sequence': "[CLS] hello i'm a fine model. 
[SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = TFBertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.09747550636529922, 'token': 10533, 'token_str': 'carpenter'}, {'sequence': '[CLS] the man worked as a waiter. [SEP]', 'score': 0.0523831807076931, 'token': 15610, 'token_str': 'waiter'}, {'sequence': '[CLS] the man worked as a barber. [SEP]', 'score': 0.04962705448269844, 'token': 13362, 'token_str': 'barber'}, {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.03788609802722931, 'token': 15893, 'token_str': 'mechanic'}, {'sequence': '[CLS] the man worked as a salesman. [SEP]', 'score': 0.037680890411138535, 'token': 18968, 'token_str': 'salesman'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.21981462836265564, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.1597415804862976, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.1154729500412941, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the woman worked as a prostitute. [SEP]', 'score': 0.037968918681144714, 'token': 19215, 'token_str': 'prostitute'}, {'sequence': '[CLS] the woman worked as a cook. [SEP]', 'score': 0.03042375110089779, 'token': 5660, 'token_str': 'cook'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. 
- In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average | | :--: | :---------: | :--: | :--: | :---: | :--: | :---: | :--: | :--: | :-----: | | | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=bert-base-uncased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
byeongal/bert-base-uncased
null
[ "transformers", "pytorch", "bert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1810.04805" ]
[ "en" ]
TAGS #transformers #pytorch #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
BERT base model (uncased) for Teachable NLP =========================================== * This model forked from bert-base-uncased for fine tune Teachable NLP. Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ----------------- BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: * Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. * Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. Intended uses & limitations --------------------------- You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: This bias will also affect all fine-tuned versions of this model. Training data ------------- The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). Training procedure ------------------ ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. 
The inputs of the model are then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: * 15% of the tokens are masked. * In 80% of the cases, the masked tokens are replaced by '[MASK]'. * In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. * In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \(\beta\*{1} = 0.9\) and \(\beta\*{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. Evaluation results ------------------ When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: ### BibTeX entry and citation info <a href="URL <img width="300px" src="URL
[ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.", "### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\*{1} = 0.9\\) and \\(\\beta\\*{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:", "### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL" ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.", "### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\*{1} = 0.9\\) and \\(\\beta\\*{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:", "### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL" ]
text-generation
transformers
# GPT-2 - This model forked from [gpt2](https://huggingface.co/gpt2-large) for fine tune [Teachable NLP](https://ainize.ai/teachable-nlp). Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-large') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."}, {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"}, {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"}, {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"}, {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large') model = GPT2Model.from_pretrained('gpt2-large') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large') model = TFGPT2Model.from_pretrained('gpt2-large') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-large') >>> set_seed(42) >>> generator("The White man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The White man worked as a mannequin for'}, {'generated_text': 'The White man worked as a maniser of the'}, {'generated_text': 'The White man worked as a bus conductor by day'}, {'generated_text': 'The White man worked as a plumber at the'}, {'generated_text': 'The White man worked as a journalist. He had'}] >>> set_seed(42) >>> generator("The Black man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The Black man worked as a man at a restaurant'}, {'generated_text': 'The Black man worked as a car salesman in a'}, {'generated_text': 'The Black man worked as a police sergeant at the'}, {'generated_text': 'The Black man worked as a man-eating monster'}, {'generated_text': 'The Black man worked as a slave, and was'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. 
## Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 | ### BibTeX entry and citation info ```bibtex @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } ``` <a href="https://huggingface.co/exbert/?model=gpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"language": "en", "license": "mit", "tags": ["gpt2"]}
byeongal/gpt2-large
null
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #gpt2 #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT-2 ===== * This model forked from gpt2 for fine tune Teachable NLP. Test the whole generation capabilities here: URL Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in this paper and first released at this page. Disclaimer: The team releasing GPT-2 also wrote a model card for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. Model description ----------------- GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. Intended uses & limitations --------------------------- You can use the raw model for text generation or fine-tune it to a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card: > > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. > > > Here's an example of how the model can have biased predictions: This bias will also affect all fine-tuned versions of this model. Training data ------------- The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. 
You can find a list of the top 1,000 domains present in WebText here. Training procedure ------------------ ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. Evaluation results ------------------ The model achieves the following results without any fine-tuning (zero-shot): ### BibTeX entry and citation info <a href="URL <img width="300px" src="URL
[ "### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):", "### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):", "### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL" ]
text-generation
transformers
# GPT-2 - This model forked from [gpt2](https://huggingface.co/gpt2-medium) for fine tune [Teachable NLP](https://ainize.ai/teachable-nlp). Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."}, {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"}, {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"}, {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"}, {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = TFGPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("The White man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The White man worked as a mannequin for'}, {'generated_text': 'The White man worked as a maniser of the'}, {'generated_text': 'The White man worked as a bus conductor by day'}, {'generated_text': 'The White man worked as a plumber at the'}, {'generated_text': 'The White man worked as a journalist. He had'}] >>> set_seed(42) >>> generator("The Black man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The Black man worked as a man at a restaurant'}, {'generated_text': 'The Black man worked as a car salesman in a'}, {'generated_text': 'The Black man worked as a police sergeant at the'}, {'generated_text': 'The Black man worked as a man-eating monster'}, {'generated_text': 'The Black man worked as a slave, and was'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. 
## Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 | ### BibTeX entry and citation info ```bibtex @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } ``` <a href="https://huggingface.co/exbert/?model=gpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"language": "en", "license": "mit", "tags": ["gpt2"]}
byeongal/gpt2-medium
null
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #gpt2 #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT-2 ===== * This model forked from gpt2 for fine tune Teachable NLP. Test the whole generation capabilities here: URL Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in this paper and first released at this page. Disclaimer: The team releasing GPT-2 also wrote a model card for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. Model description ----------------- GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. Intended uses & limitations --------------------------- You can use the raw model for text generation or fine-tune it to a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card: > > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. > > > Here's an example of how the model can have biased predictions: This bias will also affect all fine-tuned versions of this model. Training data ------------- The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. 
You can find a list of the top 1,000 domains present in WebText here. Training procedure ------------------ ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. Evaluation results ------------------ The model achieves the following results without any fine-tuning (zero-shot): ### BibTeX entry and citation info <a href="URL <img width="300px" src="URL
[ "### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):", "### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):", "### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL" ]
text-generation
transformers
# GPT-2 - This model forked from [gpt2](https://huggingface.co/gpt2) for fine tune [Teachable NLP](https://ainize.ai/teachable-nlp). Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."}, {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"}, {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"}, {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"}, {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = TFGPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("The White man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The White man worked as a mannequin for'}, {'generated_text': 'The White man worked as a maniser of the'}, {'generated_text': 'The White man worked as a bus conductor by day'}, {'generated_text': 'The White man worked as a plumber at the'}, {'generated_text': 'The White man worked as a journalist. He had'}] >>> set_seed(42) >>> generator("The Black man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The Black man worked as a man at a restaurant'}, {'generated_text': 'The Black man worked as a car salesman in a'}, {'generated_text': 'The Black man worked as a police sergeant at the'}, {'generated_text': 'The Black man worked as a man-eating monster'}, {'generated_text': 'The Black man worked as a slave, and was'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. 
## Evaluation results

The model achieves the following results without any fine-tuning (zero-shot):

| Dataset  | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB   | enwiki8 | text8 | WikiText103 | 1BW   |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:-----:|:-------:|:-----:|:-----------:|:-----:|
| (metric) | (PPL)   | (ACC)   | (ACC)  | (ACC)  | (PPL)     | (PPL) | (BPB)   | (BPC) | (PPL)       | (PPL) |
|          | 35.13   | 45.99   | 87.65  | 83.4   | 29.41     | 65.85 | 1.16    | 1.17  | 37.50       | 75.20 |

### BibTeX entry and citation info

```bibtex
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
```

<a href="https://huggingface.co/exbert/?model=gpt2">
    <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
{"language": "en", "license": "mit", "tags": ["gpt2"]}
byeongal/gpt2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #gpt2 #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GPT-2 ===== * This model forked from gpt2 for fine tune Teachable NLP. Test the whole generation capabilities here: URL Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in this paper and first released at this page. Disclaimer: The team releasing GPT-2 also wrote a model card for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. Model description ----------------- GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. Intended uses & limitations --------------------------- You can use the raw model for text generation or fine-tune it to a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card: > > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. > > > Here's an example of how the model can have biased predictions: This bias will also affect all fine-tuned versions of this model. Training data ------------- The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. 
You can find a list of the top 1,000 domains present in WebText here. Training procedure ------------------ ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. Evaluation results ------------------ The model achieves the following results without any fine-tuning (zero-shot): ### BibTeX entry and citation info <a href="URL <img width="300px" src="URL
[ "### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):", "### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):", "### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL" ]
feature-extraction
transformers
# kobart model for Teachable NLP

- This model was forked from [kobart](https://huggingface.co/hyunwoongko/kobart) for fine-tuning with [Teachable NLP](https://ainize.ai/teachable-nlp).
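The card gives no usage snippet, so the following is only a minimal sketch of how such a checkpoint is commonly loaded for feature extraction with Transformers. The `byeongal/kobart` id comes from this row; whether the repository ships an `AutoTokenizer`-loadable tokenizer and weights compatible with a plain `BartModel` is an assumption, not something the card states.

```python
# Minimal sketch, not an official example: assumes the repo provides a tokenizer
# loadable via AutoTokenizer and weights that map onto a plain BartModel.
import torch
from transformers import AutoTokenizer, BartModel

model_name = "byeongal/kobart"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartModel.from_pretrained(model_name)

text = "안녕하세요. 한국어 문장입니다."  # any Korean sentence
encoded_input = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded_input)

# Token-level features from the model's last hidden layer
print(output.last_hidden_state.shape)
```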
{"language": "ko", "license": "mit", "tags": ["bart"]}
byeongal/kobart
null
[ "transformers", "pytorch", "bart", "feature-extraction", "ko", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ko" ]
TAGS #transformers #pytorch #bart #feature-extraction #ko #license-mit #endpoints_compatible #region-us
# kobart model for Teachable NLP

- This model was forked from kobart for fine-tuning with Teachable NLP.
[ "# kobart model for Teachable NLP\n\n- This model was forked from kobart for fine-tuning with Teachable NLP." ]
[ "TAGS\n#transformers #pytorch #bart #feature-extraction #ko #license-mit #endpoints_compatible #region-us \n", "# kobart model for Teachable NLP\n\n- This model was forked from kobart for fine-tuning with Teachable NLP." ]
text-generation
transformers
# Michael Scott dialog model
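The card is only a title, so the following is a hedged sketch of the conventional DialoGPT-style chat loop rather than an interface documented here; the multi-turn history handling and generation settings are assumptions.

```python
# Hedged sketch following the common DialoGPT chat pattern; this card does not
# document its own interface, so treat the history handling as an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bypequeno/DialoGPT-small-michaelscott"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

chat_history_ids = None
for _ in range(3):
    user_input = input(">> User: ")
    # Append the EOS token so the model sees a complete turn
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Michael:", reply)
```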
{"tags": ["conversational"]}
bypequeno/DialoGPT-small-michaelscott
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Michael Scott dialog model
[ "# Michael Scott dialog model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Michael Scott dialog model" ]
text-generation
transformers
# GPT2 Fine Tuned on UrbanDictionary Honestly a little horrifying, but still funny. ## Usage Use with GPT2Tokenizer. Pad token should be set to the EOS token. Inputs should be of the form "define <your word>: ". ## Training Data All training data was obtained from [Urban Dictionary Words And Definitions on Kaggle](https://www.kaggle.com/therohk/urban-dictionary-words-dataset). Data was additionally filtered, normalized, and spell-checked. ## Bias This model was trained on public internet data and will almost definitely produce offensive results. Some efforts were made to reduce this (i.e definitions with ethnic / gender-based slurs were removed), but the final model should not be trusted to produce non-offensive definitions.
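The Usage section above is prose only; the following sketch applies those instructions (GPT2Tokenizer, pad token set to the EOS token, a `define <word>: ` prompt). The example word and sampling parameters are illustrative choices, not values taken from the card.

```python
# Sketch of the usage described above; generation settings are illustrative only.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "cactode/gpt2_urbandict_textgen"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # pad token set to EOS, as the card instructs
model = GPT2LMHeadModel.from_pretrained(model_name)

prompt = "define procrastination: "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```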
{}
cactode/gpt2_urbandict_textgen
null
[ "transformers", "pytorch", "tf", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #tf #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# GPT2 Fine Tuned on UrbanDictionary Honestly a little horrifying, but still funny. ## Usage Use with GPT2Tokenizer. Pad token should be set to the EOS token. Inputs should be of the form "define <your word>: ". ## Training Data All training data was obtained from Urban Dictionary Words And Definitions on Kaggle. Data was additionally filtered, normalized, and spell-checked. ## Bias This model was trained on public internet data and will almost definitely produce offensive results. Some efforts were made to reduce this (i.e definitions with ethnic / gender-based slurs were removed), but the final model should not be trusted to produce non-offensive definitions.
[ "# GPT2 Fine Tuned on UrbanDictionary\nHonestly a little horrifying, but still funny.", "## Usage\nUse with GPT2Tokenizer. Pad token should be set to the EOS token.\nInputs should be of the form \"define <your word>: \".", "## Training Data\nAll training data was obtained from Urban Dictionary Words And Definitions on Kaggle. Data was additionally filtered, normalized, and spell-checked.", "## Bias\nThis model was trained on public internet data and will almost definitely produce offensive results. Some efforts were made to reduce this (i.e definitions with ethnic / gender-based slurs were removed), but the final model should not be trusted to produce non-offensive definitions." ]
[ "TAGS\n#transformers #pytorch #tf #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# GPT2 Fine Tuned on UrbanDictionary\nHonestly a little horrifying, but still funny.", "## Usage\nUse with GPT2Tokenizer. Pad token should be set to the EOS token.\nInputs should be of the form \"define <your word>: \".", "## Training Data\nAll training data was obtained from Urban Dictionary Words And Definitions on Kaggle. Data was additionally filtered, normalized, and spell-checked.", "## Bias\nThis model was trained on public internet data and will almost definitely produce offensive results. Some efforts were made to reduce this (i.e definitions with ethnic / gender-based slurs were removed), but the final model should not be trusted to produce non-offensive definitions." ]
text-generation
transformers
# GPT2 Fine Tuned on UrbanDictionary Honestly a little horrifying, but still funny. ## Usage Use with GPT2Tokenizer. Pad token should be set to the EOS token. Inputs should be of the form "define <your word>: ". ## Training Data All training data was obtained from [Urban Dictionary Words And Definitions on Kaggle](https://www.kaggle.com/therohk/urban-dictionary-words-dataset). Data was additionally filtered, normalized, and spell-checked. ## Bias This model was trained on public internet data and will almost definitely produce offensive results. Some efforts were made to reduce this (i.e definitions with ethnic / gender-based slurs were removed), but the final model should not be trusted to produce non-offensive definitions.
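As with the variant above, the Usage section is prose only; the following alternative sketch wires the same instructions into the `text-generation` pipeline. The prompt word and sampling settings are again only examples.

```python
# Pipeline-based sketch of the usage described above; parameters are examples only.
from transformers import GPT2Tokenizer, pipeline, set_seed

model_name = "cactode/gpt2_urbandict_textgen_torch"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # as the Usage section instructs

generator = pipeline("text-generation", model=model_name, tokenizer=tokenizer)
set_seed(42)
print(generator("define yeet: ", max_length=48, do_sample=True, top_p=0.95))
```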
{}
cactode/gpt2_urbandict_textgen_torch
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# GPT2 Fine Tuned on UrbanDictionary Honestly a little horrifying, but still funny. ## Usage Use with GPT2Tokenizer. Pad token should be set to the EOS token. Inputs should be of the form "define <your word>: ". ## Training Data All training data was obtained from Urban Dictionary Words And Definitions on Kaggle. Data was additionally filtered, normalized, and spell-checked. ## Bias This model was trained on public internet data and will almost definitely produce offensive results. Some efforts were made to reduce this (i.e definitions with ethnic / gender-based slurs were removed), but the final model should not be trusted to produce non-offensive definitions.
[ "# GPT2 Fine Tuned on UrbanDictionary\nHonestly a little horrifying, but still funny.", "## Usage\nUse with GPT2Tokenizer. Pad token should be set to the EOS token.\nInputs should be of the form \"define <your word>: \".", "## Training Data\nAll training data was obtained from Urban Dictionary Words And Definitions on Kaggle. Data was additionally filtered, normalized, and spell-checked.", "## Bias\nThis model was trained on public internet data and will almost definitely produce offensive results. Some efforts were made to reduce this (i.e definitions with ethnic / gender-based slurs were removed), but the final model should not be trusted to produce non-offensive definitions." ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# GPT2 Fine Tuned on UrbanDictionary\nHonestly a little horrifying, but still funny.", "## Usage\nUse with GPT2Tokenizer. Pad token should be set to the EOS token.\nInputs should be of the form \"define <your word>: \".", "## Training Data\nAll training data was obtained from Urban Dictionary Words And Definitions on Kaggle. Data was additionally filtered, normalized, and spell-checked.", "## Bias\nThis model was trained on public internet data and will almost definitely produce offensive results. Some efforts were made to reduce this (i.e definitions with ethnic / gender-based slurs were removed), but the final model should not be trusted to produce non-offensive definitions." ]
fill-mask
transformers
# Indonesian BERT base model (uncased) ## Model description It is BERT-base model pre-trained with indonesian Wikipedia and indonesian newspapers using a masked language modeling (MLM) objective. This model is uncased. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers) ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='cahya/bert-base-indonesian-1.5G') >>> unmasker("Ibu ku sedang bekerja [MASK] supermarket") [{'sequence': '[CLS] ibu ku sedang bekerja di supermarket [SEP]', 'score': 0.7983310222625732, 'token': 1495}, {'sequence': '[CLS] ibu ku sedang bekerja. supermarket [SEP]', 'score': 0.090003103017807, 'token': 17}, {'sequence': '[CLS] ibu ku sedang bekerja sebagai supermarket [SEP]', 'score': 0.025469014421105385, 'token': 1600}, {'sequence': '[CLS] ibu ku sedang bekerja dengan supermarket [SEP]', 'score': 0.017966199666261673, 'token': 1555}, {'sequence': '[CLS] ibu ku sedang bekerja untuk supermarket [SEP]', 'score': 0.016971781849861145, 'token': 1572}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel model_name='cahya/bert-base-indonesian-1.5G' tokenizer = BertTokenizer.from_pretrained(model_name) model = BertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import BertTokenizer, TFBertModel model_name='cahya/bert-base-indonesian-1.5G' tokenizer = BertTokenizer.from_pretrained(model_name) model = TFBertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data This model was pre-trained with 522MB of indonesian Wikipedia and 1GB of [indonesian newspapers](https://huggingface.co/datasets/id_newspapers_2018). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ```[CLS] Sentence A [SEP] Sentence B [SEP]```
{"language": "id", "license": "mit", "datasets": ["wikipedia", "id_newspapers_2018"], "widget": [{"text": "Ibu ku sedang bekerja [MASK] sawah."}]}
cahya/bert-base-indonesian-1.5G
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "id", "dataset:wikipedia", "dataset:id_newspapers_2018", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #id #dataset-wikipedia #dataset-id_newspapers_2018 #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Indonesian BERT base model (uncased) ## Model description It is BERT-base model pre-trained with indonesian Wikipedia and indonesian newspapers using a masked language modeling (MLM) objective. This model is uncased. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in Tensorflow: ## Training data This model was pre-trained with 522MB of indonesian Wikipedia and 1GB of indonesian newspapers. The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form:
[ "# Indonesian BERT base model (uncased)", "## Model description\nIt is BERT-base model pre-trained with indonesian Wikipedia and indonesian newspapers using a masked language modeling (MLM) objective. This \nmodel is uncased.\n\nThis is one of several other language models that have been pre-trained with indonesian datasets. More detail about \nits usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models", "## Intended uses & limitations", "### How to use\nYou can use this model directly with a pipeline for masked language modeling:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\nand in Tensorflow:", "## Training data\n\nThis model was pre-trained with 522MB of indonesian Wikipedia and 1GB of\nindonesian newspapers.\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are \nthen of the form:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #id #dataset-wikipedia #dataset-id_newspapers_2018 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Indonesian BERT base model (uncased)", "## Model description\nIt is BERT-base model pre-trained with indonesian Wikipedia and indonesian newspapers using a masked language modeling (MLM) objective. This \nmodel is uncased.\n\nThis is one of several other language models that have been pre-trained with indonesian datasets. More detail about \nits usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models", "## Intended uses & limitations", "### How to use\nYou can use this model directly with a pipeline for masked language modeling:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\nand in Tensorflow:", "## Training data\n\nThis model was pre-trained with 522MB of indonesian Wikipedia and 1GB of\nindonesian newspapers.\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are \nthen of the form:" ]
fill-mask
transformers
# Indonesian BERT base model (uncased) ## Model description It is BERT-base model pre-trained with indonesian Wikipedia using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers) ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='cahya/bert-base-indonesian-522M') >>> unmasker("Ibu ku sedang bekerja [MASK] supermarket") [{'sequence': '[CLS] ibu ku sedang bekerja di supermarket [SEP]', 'score': 0.7983310222625732, 'token': 1495}, {'sequence': '[CLS] ibu ku sedang bekerja. supermarket [SEP]', 'score': 0.090003103017807, 'token': 17}, {'sequence': '[CLS] ibu ku sedang bekerja sebagai supermarket [SEP]', 'score': 0.025469014421105385, 'token': 1600}, {'sequence': '[CLS] ibu ku sedang bekerja dengan supermarket [SEP]', 'score': 0.017966199666261673, 'token': 1555}, {'sequence': '[CLS] ibu ku sedang bekerja untuk supermarket [SEP]', 'score': 0.016971781849861145, 'token': 1572}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel model_name='cahya/bert-base-indonesian-522M' tokenizer = BertTokenizer.from_pretrained(model_name) model = BertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import BertTokenizer, TFBertModel model_name='cahya/bert-base-indonesian-522M' tokenizer = BertTokenizer.from_pretrained(model_name) model = TFBertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data This model was pre-trained with 522MB of indonesian Wikipedia. The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ```[CLS] Sentence A [SEP] Sentence B [SEP]```
{"language": "id", "license": "mit", "datasets": ["wikipedia"], "widget": [{"text": "Ibu ku sedang bekerja [MASK] sawah."}]}
cahya/bert-base-indonesian-522M
null
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "id", "dataset:wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #tf #jax #bert #fill-mask #id #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# Indonesian BERT base model (uncased) ## Model description It is BERT-base model pre-trained with indonesian Wikipedia using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in Tensorflow: ## Training data This model was pre-trained with 522MB of indonesian Wikipedia. The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form:
[ "# Indonesian BERT base model (uncased)", "## Model description\nIt is BERT-base model pre-trained with indonesian Wikipedia using a masked language modeling (MLM) objective. This \nmodel is uncased: it does not make a difference between indonesia and Indonesia.\n\nThis is one of several other language models that have been pre-trained with indonesian datasets. More detail about \nits usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models", "## Intended uses & limitations", "### How to use\nYou can use this model directly with a pipeline for masked language modeling:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\nand in Tensorflow:", "## Training data\n\nThis model was pre-trained with 522MB of indonesian Wikipedia.\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are \nthen of the form:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #id #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Indonesian BERT base model (uncased)", "## Model description\nIt is BERT-base model pre-trained with indonesian Wikipedia using a masked language modeling (MLM) objective. This \nmodel is uncased: it does not make a difference between indonesia and Indonesia.\n\nThis is one of several other language models that have been pre-trained with indonesian datasets. More detail about \nits usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models", "## Intended uses & limitations", "### How to use\nYou can use this model directly with a pipeline for masked language modeling:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\nand in Tensorflow:", "## Training data\n\nThis model was pre-trained with 522MB of indonesian Wikipedia.\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are \nthen of the form:" ]
summarization
transformers
# Indonesian BERT2BERT Summarization Model Finetuned BERT-base summarization model for Indonesian. ## Finetuning Corpus `bert2bert-indonesian-summarization` model is based on `cahya/bert-base-indonesian-1.5G` by [cahya](https://huggingface.co/cahya), finetuned using [id_liputan6](https://huggingface.co/datasets/id_liputan6) dataset. ## Load Finetuned Model ```python from transformers import BertTokenizer, EncoderDecoderModel tokenizer = BertTokenizer.from_pretrained("cahya/bert2bert-indonesian-summarization") tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token model = EncoderDecoderModel.from_pretrained("cahya/bert2bert-indonesian-summarization") ``` ## Code Sample ```python from transformers import BertTokenizer, EncoderDecoderModel tokenizer = BertTokenizer.from_pretrained("cahya/bert2bert-indonesian-summarization") tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token model = EncoderDecoderModel.from_pretrained("cahya/bert2bert-indonesian-summarization") # ARTICLE_TO_SUMMARIZE = "" # generate summary input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt') summary_ids = model.generate(input_ids, min_length=20, max_length=80, num_beams=10, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True, no_repeat_ngram_size=2, use_cache=True, do_sample = True, temperature = 0.8, top_k = 50, top_p = 0.95) summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(summary_text) ``` Output: ``` ```
{"language": "id", "license": "apache-2.0", "tags": ["pipeline:summarization", "summarization", "bert2bert"], "datasets": ["id_liputan6"]}
cahya/bert2bert-indonesian-summarization
null
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "pipeline:summarization", "summarization", "bert2bert", "id", "dataset:id_liputan6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #encoder-decoder #text2text-generation #pipeline-summarization #summarization #bert2bert #id #dataset-id_liputan6 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Indonesian BERT2BERT Summarization Model Finetuned BERT-base summarization model for Indonesian. ## Finetuning Corpus 'bert2bert-indonesian-summarization' model is based on 'cahya/bert-base-indonesian-1.5G' by cahya, finetuned using id_liputan6 dataset. ## Load Finetuned Model ## Code Sample Output:
[ "# Indonesian BERT2BERT Summarization Model\n\nFinetuned BERT-base summarization model for Indonesian.", "## Finetuning Corpus\n\n'bert2bert-indonesian-summarization' model is based on 'cahya/bert-base-indonesian-1.5G' by cahya, finetuned using id_liputan6 dataset.", "## Load Finetuned Model", "## Code Sample\n\n\n\nOutput:" ]
[ "TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #pipeline-summarization #summarization #bert2bert #id #dataset-id_liputan6 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Indonesian BERT2BERT Summarization Model\n\nFinetuned BERT-base summarization model for Indonesian.", "## Finetuning Corpus\n\n'bert2bert-indonesian-summarization' model is based on 'cahya/bert-base-indonesian-1.5G' by cahya, finetuned using id_liputan6 dataset.", "## Load Finetuned Model", "## Code Sample\n\n\n\nOutput:" ]
summarization
transformers
# Indonesian BERT2BERT Summarization Model Finetuned EncoderDecoder model using BERT-base and GPT2-small for Indonesian text summarization. ## Finetuning Corpus `bert2gpt-indonesian-summarization` model is based on `cahya/bert-base-indonesian-1.5G` and `cahya/gpt2-small-indonesian-522M`by [cahya](https://huggingface.co/cahya), finetuned using [id_liputan6](https://huggingface.co/datasets/id_liputan6) dataset. ## Load Finetuned Model ```python from transformers import BertTokenizer, EncoderDecoderModel tokenizer = BertTokenizer.from_pretrained("cahya/bert2gpt-indonesian-summarization") tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token model = EncoderDecoderModel.from_pretrained("cahya/bert2gpt-indonesian-summarization") ``` ## Code Sample ```python from transformers import BertTokenizer, EncoderDecoderModel tokenizer = BertTokenizer.from_pretrained("cahya/bert2gpt-indonesian-summarization") tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token model = EncoderDecoderModel.from_pretrained("cahya/bert2gpt-indonesian-summarization") # ARTICLE_TO_SUMMARIZE = "" # generate summary input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt') summary_ids = model.generate(input_ids, min_length=20, max_length=80, num_beams=10, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True, no_repeat_ngram_size=2, use_cache=True, do_sample = True, temperature = 0.8, top_k = 50, top_p = 0.95) summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(summary_text) ``` Output: ``` ```
{"language": "id", "license": "apache-2.0", "tags": ["pipeline:summarization", "summarization", "bert2gpt"], "datasets": ["id_liputan6"]}
cahya/bert2gpt-indonesian-summarization
null
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "pipeline:summarization", "summarization", "bert2gpt", "id", "dataset:id_liputan6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #encoder-decoder #text2text-generation #pipeline-summarization #summarization #bert2gpt #id #dataset-id_liputan6 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Indonesian BERT2BERT Summarization Model

Finetuned EncoderDecoder model using BERT-base and GPT2-small for Indonesian text summarization.

## Finetuning Corpus

'bert2gpt-indonesian-summarization' model is based on 'cahya/bert-base-indonesian-1.5G' and 'cahya/gpt2-small-indonesian-522M' by cahya, finetuned using id_liputan6 dataset.

## Load Finetuned Model

## Code Sample

Output:
[ "# Indonesian BERT2BERT Summarization Model\n\nFinetuned EncoderDecoder model using BERT-base and GPT2-small for Indonesian text summarization.", "## Finetuning Corpus\n\n'bert2gpt-indonesian-summarization' model is based on 'cahya/bert-base-indonesian-1.5G' and 'cahya/gpt2-small-indonesian-522M' by cahya, finetuned using id_liputan6 dataset.", "## Load Finetuned Model", "## Code Sample\n\n\n\nOutput:" ]
[ "TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #pipeline-summarization #summarization #bert2gpt #id #dataset-id_liputan6 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Indonesian BERT2BERT Summarization Model\n\nFinetuned EncoderDecoder model using BERT-base and GPT2-small for Indonesian text summarization.", "## Finetuning Corpus\n\n'bert2gpt-indonesian-summarization' model is based on 'cahya/bert-base-indonesian-1.5G' and 'cahya/gpt2-small-indonesian-522M' by cahya, finetuned using id_liputan6 dataset.", "## Load Finetuned Model", "## Code Sample\n\n\n\nOutput:" ]
fill-mask
transformers
# Indonesian DistilBERT base model (uncased) ## Model description This model is a distilled version of the [Indonesian BERT base model](https://huggingface.co/cahya/bert-base-indonesian-1.5G). This model is uncased. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers) ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='cahya/distilbert-base-indonesian') >>> unmasker("Ayahku sedang bekerja di sawah untuk [MASK] padi") [ { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk menanam padi [SEP]", "score": 0.6853187084197998, "token": 12712, "token_str": "menanam" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk bertani padi [SEP]", "score": 0.03739545866847038, "token": 15484, "token_str": "bertani" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk memetik padi [SEP]", "score": 0.02742469497025013, "token": 30338, "token_str": "memetik" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk penggilingan padi [SEP]", "score": 0.02214187942445278, "token": 28252, "token_str": "penggilingan" }, { "sequence": "[CLS] ayahku sedang bekerja di sawah untuk tanam padi [SEP]", "score": 0.0185895636677742, "token": 11308, "token_str": "tanam" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import DistilBertTokenizer, DistilBertModel model_name='cahya/distilbert-base-indonesian' tokenizer = DistilBertTokenizer.from_pretrained(model_name) model = DistilBertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import DistilBertTokenizer, TFDistilBertModel model_name='cahya/distilbert-base-indonesian' tokenizer = DistilBertTokenizer.from_pretrained(model_name) model = TFDistilBertModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data This model was distiled with 522MB of indonesian Wikipedia and 1GB of [indonesian newspapers](https://huggingface.co/datasets/id_newspapers_2018). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form: ```[CLS] Sentence A [SEP] Sentence B [SEP]```
{"language": "id", "license": "mit", "datasets": ["wikipedia", "id_newspapers_2018"], "widget": [{"text": "ayahku sedang bekerja di sawah untuk [MASK] padi."}]}
cahya/distilbert-base-indonesian
null
[ "transformers", "pytorch", "distilbert", "fill-mask", "id", "dataset:wikipedia", "dataset:id_newspapers_2018", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #distilbert #fill-mask #id #dataset-wikipedia #dataset-id_newspapers_2018 #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Indonesian DistilBERT base model (uncased)

## Model description
This model is a distilled version of the Indonesian BERT base model.
This model is uncased.

This is one of several other language models that have been pre-trained with indonesian datasets. More detail about
its usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models

## Intended uses & limitations

### How to use
You can use this model directly with a pipeline for masked language modeling:

Here is how to use this model to get the features of a given text in PyTorch:

and in Tensorflow:

## Training data

This model was distilled with 522MB of indonesian Wikipedia and 1GB of
indonesian newspapers.
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are
then of the form:
[ "# Indonesian DistilBERT base model (uncased)", "## Model description\nThis model is a distilled version of the Indonesian BERT base model.\nThis model is uncased.\n\nThis is one of several other language models that have been pre-trained with indonesian datasets. More detail about \nits usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models", "## Intended uses & limitations", "### How to use\nYou can use this model directly with a pipeline for masked language modeling:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\nand in Tensorflow:", "## Training data\n\nThis model was distilled with 522MB of indonesian Wikipedia and 1GB of\nindonesian newspapers.\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are \nthen of the form:" ]
[ "TAGS\n#transformers #pytorch #distilbert #fill-mask #id #dataset-wikipedia #dataset-id_newspapers_2018 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Indonesian DistilBERT base model (uncased)", "## Model description\nThis model is a distilled version of the Indonesian BERT base model.\nThis model is uncased.\n\nThis is one of several other language models that have been pre-trained with indonesian datasets. More detail about \nits usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models", "## Intended uses & limitations", "### How to use\nYou can use this model directly with a pipeline for masked language modeling:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\nand in Tensorflow:", "## Training data\n\nThis model was distilled with 522MB of indonesian Wikipedia and 1GB of\nindonesian newspapers.\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are \nthen of the form:" ]
text-generation
transformers
# Indonesian GPT2 small model ## Model description It is GPT2-small model pre-trained with indonesian Wikipedia using a causal language modeling (CLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers) ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='cahya/gpt2-small-indonesian-522M') >>> set_seed(42) >>> generator("Kerajaan Majapahit adalah", max_length=30, num_return_sequences=5, num_beams=10) [{'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-15. Kerajaan ini berdiri pada abad ke-14'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-16. Kerajaan ini berdiri pada abad ke-14'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-15. Kerajaan ini berdiri pada abad ke-15'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-16. Kerajaan ini berdiri pada abad ke-15'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-15. Kerajaan ini merupakan kelanjutan dari Kerajaan Majapahit yang'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model model_name='cahya/gpt2-small-indonesian-522M' tokenizer = GPT2Tokenizer.from_pretrained(model_name) model = GPT2Model.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import GPT2Tokenizer, TFGPT2Model model_name='cahya/gpt2-small-indonesian-522M' tokenizer = GPT2Tokenizer.from_pretrained(model_name) model = TFGPT2Model.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data This model was pre-trained with 522MB of indonesian Wikipedia. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 52,000. The inputs are sequences of 128 consecutive tokens.
{"language": "id", "license": "mit", "datasets": ["Indonesian Wikipedia"], "widget": [{"text": "Pulau Dewata sering dikunjungi"}]}
cahya/gpt2-small-indonesian-522M
null
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "id", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #tf #jax #gpt2 #text-generation #id #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# Indonesian GPT2 small model ## Model description It is GPT2-small model pre-trained with indonesian Wikipedia using a causal language modeling (CLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in Tensorflow: ## Training data This model was pre-trained with 522MB of indonesian Wikipedia. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 52,000. The inputs are sequences of 128 consecutive tokens.
[ "# Indonesian GPT2 small model", "## Model description\nIt is GPT2-small model pre-trained with indonesian Wikipedia using a causal language modeling (CLM) objective. This \nmodel is uncased: it does not make a difference between indonesia and Indonesia.\n\nThis is one of several other language models that have been pre-trained with indonesian datasets. More detail about \nits usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models", "## Intended uses & limitations", "### How to use\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, \nwe set a seed for reproducibility:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\nand in Tensorflow:", "## Training data\n\nThis model was pre-trained with 522MB of indonesian Wikipedia.\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and \na vocabulary size of 52,000. The inputs are sequences of 128 consecutive tokens." ]
[ "TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #id #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# Indonesian GPT2 small model", "## Model description\nIt is GPT2-small model pre-trained with indonesian Wikipedia using a causal language modeling (CLM) objective. This \nmodel is uncased: it does not make a difference between indonesia and Indonesia.\n\nThis is one of several other language models that have been pre-trained with indonesian datasets. More detail about \nits usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models", "## Intended uses & limitations", "### How to use\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, \nwe set a seed for reproducibility:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\nand in Tensorflow:", "## Training data\n\nThis model was pre-trained with 522MB of indonesian Wikipedia.\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and \na vocabulary size of 52,000. The inputs are sequences of 128 consecutive tokens." ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial-cv](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial-cv) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.1822 - Wer: 0.1423 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-07 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
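The card does not include inference code; the following is a minimal usage sketch (not part of the auto-generated card) for transcribing a Turkish recording with this checkpoint. It assumes the `Wav2Vec2Processor` was saved alongside the fine-tuned model (referenced here by its repository id `cahya/output`) and uses `example.wav` as a placeholder path.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "cahya/output"  # repository id of this fine-tuned run
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
model.eval()

# "example.wav" is a placeholder; any Turkish speech recording works.
speech_array, sampling_rate = torchaudio.load("example.wav")
if sampling_rate != 16_000:
    # wav2vec2 expects 16 kHz input, so resample if necessary.
    speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)

inputs = processor(speech_array.squeeze().numpy(), sampling_rate=16_000,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Transcription:", processor.batch_decode(predicted_ids))
```

Passing `**inputs` forwards whatever the processor returns (with or without an attention mask), which keeps the sketch independent of the processor configuration saved with the checkpoint.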
{"language": ["tr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "output", "results": []}]}
cahya/output
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "tr", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "tr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #tr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
# output This model is a fine-tuned version of cahya/wav2vec2-base-turkish-artificial-cv on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.1822 - Wer: 0.1423 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-07 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
[ "# output\n\nThis model is a fine-tuned version of cahya/wav2vec2-base-turkish-artificial-cv on the COMMON_VOICE - TR dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1822\n- Wer: 0.1423", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7.5e-07\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 1.0", "### Training results", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.2\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #tr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "# output\n\nThis model is a fine-tuned version of cahya/wav2vec2-base-turkish-artificial-cv on the COMMON_VOICE - TR dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1822\n- Wer: 0.1423", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7.5e-07\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 1.0", "### Training results", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.18.2\n- Tokenizers 0.10.3" ]
fill-mask
transformers
# Indonesian RoBERTa base model (uncased)

## Model description

It is a RoBERTa-base model pre-trained on Indonesian Wikipedia using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia.

This is one of several other language models that have been pre-trained with Indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc.) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers).

## Intended uses & limitations

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='cahya/roberta-base-indonesian-522M')
>>> unmasker("Ibu ku sedang bekerja <mask> supermarket")
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel
model_name='cahya/roberta-base-indonesian-522M'
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = RobertaModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import RobertaTokenizer, TFRobertaModel
model_name='cahya/roberta-base-indonesian-522M'
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = TFRobertaModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data

This model was pre-trained on 522 MB of Indonesian Wikipedia. The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form:

```<s> Sentence A </s> Sentence B </s>```
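Not in the original card: a small sketch showing how the tokenizer wraps inputs in the special-token format described above. It reuses the checkpoint loaded in the PyTorch example; the exact subword splits depend on the vocabulary shipped with the model.

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('cahya/roberta-base-indonesian-522M')

# A single sentence is wrapped as <s> ... </s>.
ids = tokenizer.encode("ibu ku sedang bekerja di supermarket")
print(tokenizer.convert_ids_to_tokens(ids))

# A sentence pair follows the <s> Sentence A </s> Sentence B </s> pattern described above
# (the Hugging Face RoBERTa convention inserts an extra separator between the segments).
pair = tokenizer("kalimat pertama", "kalimat kedua")
print(tokenizer.convert_ids_to_tokens(pair["input_ids"]))
```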
{"language": "id", "license": "mit", "datasets": ["Indonesian Wikipedia"], "widget": [{"text": "Ibu ku sedang bekerja <mask> supermarket."}]}
cahya/roberta-base-indonesian-522M
null
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "id", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #transformers #pytorch #tf #jax #roberta #fill-mask #id #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Indonesian RoBERTa base model (uncased)

## Model description

It is a RoBERTa-base model pre-trained on Indonesian Wikipedia using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia.

This is one of several other language models that have been pre-trained with Indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc.) is available at Transformer based Indonesian Language Models.

## Intended uses & limitations

### How to use

You can use this model directly with a pipeline for masked language modeling:

Here is how to use this model to get the features of a given text in PyTorch:

and in TensorFlow:

## Training data

This model was pre-trained on 522 MB of Indonesian Wikipedia. The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form:
[ "# Indonesian RoBERTa base model (uncased)", "## Model description\nIt is RoBERTa-base model pre-trained with indonesian Wikipedia using a masked language modeling (MLM) objective. This \nmodel is uncased: it does not make a difference between indonesia and Indonesia.\n\nThis is one of several other language models that have been pre-trained with indonesian datasets. More detail about \nits usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models", "## Intended uses & limitations", "### How to use\nYou can use this model directly with a pipeline for masked language modeling:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\nand in Tensorflow:", "## Training data\n\nThis model was pre-trained with 522MB of indonesian Wikipedia.\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are \nthen of the form:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #id #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Indonesian RoBERTa base model (uncased)", "## Model description\nIt is RoBERTa-base model pre-trained with indonesian Wikipedia using a masked language modeling (MLM) objective. This \nmodel is uncased: it does not make a difference between indonesia and Indonesia.\n\nThis is one of several other language models that have been pre-trained with indonesian datasets. More detail about \nits usage on downstream tasks (text classification, text generation, etc) is available at Transformer based Indonesian Language Models", "## Intended uses & limitations", "### How to use\nYou can use this model directly with a pipeline for masked language modeling:\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\nand in Tensorflow:", "## Training data\n\nThis model was pre-trained with 522MB of indonesian Wikipedia.\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are \nthen of the form:" ]