Dataset columns:

| Column | Dtype | Range / Values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-01 18:27:28 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 532 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-01 18:27:19 |
| card | string | length 11 to 1.01M |
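Each record below follows this schema, with one field per line in the column order above (modelId, author, last_modified, downloads, likes, library_name, tags, pipeline_tag, createdAt, card). Purely as an illustration, a dump like this could be explored with pandas; the sketch below assumes the rows have been exported to a hypothetical `model_cards_sample.csv`, which is not part of this dump.

```python
import pandas as pd

# Hypothetical export path; the same dump could equally be Parquet or JSON Lines.
df = pd.read_csv("model_cards_sample.csv")

# Example query over the documented columns: question-answering models
# backed by the transformers library, sorted by download count.
qa_models = (
    df[(df["pipeline_tag"] == "question-answering") & (df["library_name"] == "transformers")]
    .sort_values("downloads", ascending=False)
    [["modelId", "author", "downloads", "likes"]]
)
print(qa_models.head())
```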
razent/spbert-mlm-zero
razent
2022-03-15T03:24:45Z
3
2
transformers
[ "transformers", "pytorch", "tf", "jax", "question-answering", "knowledge-graph", "code", "arxiv:2106.09997", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: - code tags: - question-answering - knowledge-graph --- # SPBERT MLM (Scratch) ## Introduction Paper: [SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs](https://arxiv.org/abs/2106.09997) Authors: _Hieu Tran, Long Phan, James Anibal, Binh T. Nguyen, Truong-Son Nguyen_ ## How to use For more details, check out [our GitHub repo](https://github.com/heraclex12/NLP2SPARQL). Here is an example in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('razent/spbert-mlm-zero') model = AutoModel.from_pretrained("razent/spbert-mlm-zero") text = "select * where brack_open var_a var_b var_c sep_dot brack_close" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` or in TensorFlow: ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('razent/spbert-mlm-zero') model = TFAutoModel.from_pretrained("razent/spbert-mlm-zero") text = "select * where brack_open var_a var_b var_c sep_dot brack_close" encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Citation ``` @misc{tran2021spbert, title={SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs}, author={Hieu Tran and Long Phan and James Anibal and Binh T. Nguyen and Truong-Son Nguyen}, year={2021}, eprint={2106.09997}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
aaraki/distilbert-base-uncased-finetuned-squad
aaraki
2022-03-15T00:52:37Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-14T08:42:50Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2248 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2636 | 1.0 | 5533 | 1.2248 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
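The card above leaves its usage sections as "More information needed". Purely as an illustration (not part of the original card), a minimal extractive question-answering sketch with the `transformers` pipeline API might look like the following; the question and context strings are invented.

```python
from transformers import pipeline

# Load the fine-tuned SQuAD checkpoint for extractive question answering.
qa = pipeline(
    "question-answering",
    model="aaraki/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="distilbert-base-uncased was fine-tuned on the SQuAD dataset for one epoch.",
)
print(result["answer"], result["score"])  # predicted answer span and its confidence
```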
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_English
StivenLancheros
2022-03-14T23:42:29Z
3
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-14T22:56:59Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_English results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-base-cased-v1.2-finetuned-ner-CRAFT_English This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1614 - Precision: 0.8585 - Recall: 0.8623 - F1: 0.8604 - Accuracy: 0.9724 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0725 | 1.0 | 1360 | 0.1242 | 0.8090 | 0.8698 | 0.8383 | 0.9681 | | 0.0281 | 2.0 | 2720 | 0.1541 | 0.8497 | 0.8549 | 0.8523 | 0.9705 | | 0.0162 | 3.0 | 4080 | 0.1510 | 0.8390 | 0.8681 | 0.8533 | 0.9711 | | 0.0053 | 4.0 | 5440 | 0.1614 | 0.8585 | 0.8623 | 0.8604 | 0.9724 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
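This card likewise reports metrics without a usage example. A hedged sketch with the `transformers` token-classification pipeline follows; the sentence is invented, and `aggregation_strategy="simple"` is just one reasonable way to merge word-piece tokens into entity spans.

```python
from transformers import pipeline

# Biomedical NER with the fine-tuned BioBERT checkpoint.
ner = pipeline(
    "token-classification",
    model="StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_English",
    aggregation_strategy="simple",  # group sub-word tokens into whole-entity spans
)

for entity in ner("The BRCA1 protein is expressed in human mammary epithelial cells."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```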
peterhsu/codeparrot-ds
peterhsu
2022-03-14T23:00:48Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-14T15:52:25Z
--- license: mit tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4939 | 0.93 | 5000 | 1.9729 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
sanchit-gandhi/wav2vec2-2-bart-large-no-adapter
sanchit-gandhi
2022-03-14T21:45:57Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-14T12:33:35Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 5.6120 - Wer: 1.0267 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.7189 | 0.56 | 500 | 6.9796 | 0.9350 | | 6.5068 | 1.12 | 1000 | 6.4823 | 1.3923 | | 6.4601 | 1.68 | 1500 | 6.1801 | 1.1578 | | 6.1802 | 2.24 | 2000 | 6.0002 | 1.7750 | | 6.0888 | 2.8 | 2500 | 5.8453 | 1.7581 | | 6.0993 | 3.36 | 3000 | 5.7702 | 1.4096 | | 6.0851 | 3.92 | 3500 | 5.6634 | 1.0944 | | 5.9357 | 4.48 | 4000 | 5.6120 | 1.0267 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
gabitoo1234/autonlp-mut_uchile-640218740
gabitoo1234
2022-03-14T19:26:47Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "unk", "dataset:gabitoo1234/autonlp-data-mut_uchile", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-14T18:57:53Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - gabitoo1234/autonlp-data-mut_uchile co2_eq_emissions: 43.078469852595994 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 640218740 - CO2 Emissions (in grams): 43.078469852595994 ## Validation Metrics - Loss: 0.8302136063575745 - Accuracy: 0.7887341933835739 - Macro F1: 0.5756730305293746 - Micro F1: 0.7887341933835739 - Weighted F1: 0.7878942570915727 - Macro Precision: 0.620883634472996 - Micro Precision: 0.7887341933835739 - Weighted Precision: 0.8009430092038783 - Macro Recall: 0.5521761315904072 - Micro Recall: 0.7887341933835739 - Weighted Recall: 0.7887341933835739 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/gabitoo1234/autonlp-mut_uchile-640218740 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("gabitoo1234/autonlp-mut_uchile-640218740", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("gabitoo1234/autonlp-mut_uchile-640218740", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
navteca/ms-marco-MiniLM-L-12-v2
navteca
2022-03-14T15:56:35Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "jax", "bert", "text-classification", "en", "license:mit", "region:us" ]
text-classification
2022-03-14T14:52:30Z
--- language: en license: mit pipeline_tag: text-classification tags: - sentence-transformers --- # Cross-Encoder for MS Marco The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Training Data This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. ## Usage Usage is easiest when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
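The snippet in the card above scores (query, passage) pairs but stops before the sorting step the card describes. A hedged end-to-end re-ranking sketch is shown below; the query and passages are invented, and the checkpoint name is taken from this record.

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("navteca/ms-marco-MiniLM-L-12-v2", max_length=512)

query = "How many people live in Berlin?"
passages = [
    "Berlin had a registered population of roughly 3.7 million inhabitants.",
    "Berlin is well known for its museums and galleries.",
]

# Score every (query, passage) pair, then rank passages by descending relevance.
scores = model.predict([(query, passage) for passage in passages])
ranked = sorted(zip(scores, passages), reverse=True)
for score, passage in ranked:
    print(f"{score:.3f}\t{passage}")
```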
GPL/scidocs-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:26:01Z
121
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:25:59Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/webis-touche2020-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:25:36Z
119
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:25:34Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/trec-news-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:25:19Z
128
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:25:17Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/quora-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:24:46Z
123
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:24:44Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/nfcorpus-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:24:13Z
127
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:24:10Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/hotpotqa-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:23:55Z
124
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:23:53Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/dbpedia-entity-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:23:21Z
119
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:23:19Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/climate-fever-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:23:05Z
125
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:23:02Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/arguana-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:22:47Z
121
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:22:45Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/trec-covid-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:22:13Z
125
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:22:10Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/robust04-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:18:37Z
128
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:18:35Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/bioasq-1m-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:17:47Z
128
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:17:45Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/scifact-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:17:30Z
120
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:16:53Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
mt-empty/english-assyrian
mt-empty
2022-03-14T11:01:52Z
9
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "en", "as", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-03T14:05:11Z
--- language: - en - as tags: - translation license: apache-2.0 metrics: - sacrebleu --- https://github.com/mt-empty/assyrian-translation-model This is an English to Assyrian/Eastern Syriac machine translation model; it uses the [English to Arabic](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) model as its base. Although the project's aim is to build an English to Assyrian model - covering the dialects that fall under [Northeastern Neo-Aramaic](https://en.wikipedia.org/wiki/Northeastern_Neo-Aramaic) - the current model mostly provides translations in Classical Syriac. This model is a good initial step, but I hope future work will bring it more in line with Assyrian dialects.
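The card does not include a usage snippet, so here is a minimal, illustrative sketch using the generic `transformers` translation pipeline; the pipeline call and the example sentence are assumptions added here, not part of the original project:

```python
from transformers import pipeline

# Sketch: MarianMT checkpoints work with the generic "translation" task.
# The input sentence is a placeholder; as noted above, output will mostly
# be Classical Syriac rather than a specific Assyrian dialect.
translator = pipeline("translation", model="mt-empty/english-assyrian")
print(translator("Good morning, how are you?")[0]["translation_text"])
```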
lijingxin/distilbert-base-uncased-distilled-clinc
lijingxin
2022-03-14T10:42:34Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-14T10:33:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9470967741935484 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2782 - Accuracy: 0.9471 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.3365 | 1.0 | 318 | 1.6602 | 0.7361 | | 1.2799 | 2.0 | 636 | 0.8378 | 0.8548 | | 0.6739 | 3.0 | 954 | 0.4872 | 0.9132 | | 0.4143 | 4.0 | 1272 | 0.3640 | 0.9352 | | 0.3051 | 5.0 | 1590 | 0.3168 | 0.9406 | | 0.2585 | 6.0 | 1908 | 0.2970 | 0.9442 | | 0.235 | 7.0 | 2226 | 0.2876 | 0.9458 | | 0.2236 | 8.0 | 2544 | 0.2824 | 0.9458 | | 0.2168 | 9.0 | 2862 | 0.2794 | 0.9468 | | 0.2138 | 10.0 | 3180 | 0.2782 | 0.9471 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2 - Datasets 1.16.1 - Tokenizers 0.10.3
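As a usage illustration (not part of the original card), the distilled intent classifier can be queried through the standard text-classification pipeline; the utterance below is invented, and how the labels print depends on how `id2label` was saved, so treat this as a sketch:

```python
from transformers import pipeline

# Sketch: CLINC-style intent detection with the distilled student model.
classifier = pipeline(
    "text-classification",
    model="lijingxin/distilbert-base-uncased-distilled-clinc",
)
# Placeholder utterance; the label may print as an intent name or as LABEL_<id>,
# depending on the saved config.
print(classifier("please transfer 100 dollars from checking to savings"))
```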
Kalaoke/embeddings_dense_model
Kalaoke
2022-03-14T09:54:04Z
119
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T09:53:55Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # Kalaoke/embeddings_dense_model This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 50 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Kalaoke/embeddings_dense_model') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Kalaoke/embeddings_dense_model) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1050 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 315, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Asym( (topic-0): Dense({'in_features': 768, 'out_features': 50, 'bias': False, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (title-0): Dense({'in_features': 768, 'out_features': 50, 'bias': False, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
yugasa/Chatbots-for-the-Automotive-Industry
yugasa
2022-03-14T09:19:33Z
0
3
null
[ "region:us" ]
null
2022-03-14T09:18:00Z
The growth of digitalization is reshaping businesses, industries, and individuals from all walks of life. This is the age of conversational commerce, and chatbots are being paired with many OTT apps in the automotive sector. Chatbots are rapidly proving to be a holistic answer for company communication processes. A recent poll reveals that 90 percent of customers now choose instant messaging to get back in touch with a firm, although only 63 percent of consumers prefer messaging over any other communication channel. Today, customers, particularly millennials, actively use messaging and chat programs, carrying out purchases, research, and engagement in real time and boosting businesses in the process. With breakthroughs in Artificial Intelligence, today's platforms give a real-time experience when connecting with a chosen company. The automotive sector is one where clients demand individualized help while manufacturers and vehicle dealers look to economize. But before going any further, here is a quick overview of what chatbots are. Read More: https://helloyubo.com/chatbot/chatbots-for-the-automotive-industry/
lijingxin/distilbert-base-uncased-finetuned-clinc
lijingxin
2022-03-14T09:09:37Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-14T09:05:40Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9161290322580645 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7755 - Accuracy: 0.9161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2992 | 1.0 | 318 | 3.2969 | 0.7339 | | 2.6329 | 2.0 | 636 | 1.8817 | 0.8235 | | 1.5442 | 3.0 | 954 | 1.1561 | 0.8939 | | 1.0132 | 4.0 | 1272 | 0.8595 | 0.9103 | | 0.7953 | 5.0 | 1590 | 0.7755 | 0.9161 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2 - Datasets 1.16.1 - Tokenizers 0.10.3
holtin/distilbert-base-uncased-holtin-finetuned-squad
holtin
2022-03-14T08:09:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-14T07:57:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-holtin-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-holtin-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 3.8541 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 84 | 4.4978 | | No log | 2.0 | 168 | 3.9588 | | No log | 3.0 | 252 | 3.8541 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
ComCom/skt_kogpt2-base-v2
ComCom
2022-03-14T07:37:27Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "ko", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-14T06:28:29Z
--- language: ko tags: - gpt2 license: cc-by-nc-sa-4.0 --- - This model was forked from [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2). - You can use this model in [Teachable-NLP](https://ainize.ai/teachable-nlp). For more details, see: https://github.com/SKT-AI/KoGPT2
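The card itself gives no loading example, so here is a hedged sketch; the special-token settings follow what is commonly shown for the upstream skt/kogpt2-base-v2 checkpoint and are assumed to carry over to this fork:

```python
import torch
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast

model_id = "ComCom/skt_kogpt2-base-v2"
# Assumption: the fork ships the same fast-tokenizer files and special tokens
# as the upstream skt/kogpt2-base-v2 repository.
tokenizer = PreTrainedTokenizerFast.from_pretrained(
    model_id,
    bos_token="</s>", eos_token="</s>", unk_token="<unk>",
    pad_token="<pad>", mask_token="<mask>",
)
model = GPT2LMHeadModel.from_pretrained(model_id)

input_ids = tokenizer.encode("안녕하세요,", return_tensors="pt")  # placeholder Korean prompt
with torch.no_grad():
    output_ids = model.generate(input_ids, max_length=48, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0]))
```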
BAHIJA/distilbert-base-uncased-finetuned-cola
BAHIJA
2022-03-13T23:42:41Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5481326292844919 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7371 - Matthews Correlation: 0.5481 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5298 | 1.0 | 535 | 0.5333 | 0.4142 | | 0.3619 | 2.0 | 1070 | 0.5174 | 0.5019 | | 0.2449 | 3.0 | 1605 | 0.6394 | 0.4921 | | 0.1856 | 4.0 | 2140 | 0.7371 | 0.5481 | | 0.133 | 5.0 | 2675 | 0.8600 | 0.5327 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
huggingtweets/ayurastro
huggingtweets
2022-03-13T23:27:16Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-13T23:26:25Z
--- language: en thumbnail: http://www.huggingtweets.com/ayurastro/1647214031676/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/493786234221641730/OFQm2K8M_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">AyurAstro®</div> <div style="text-align: center; font-size: 14px;">@ayurastro</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from AyurAstro®. | Data | AyurAstro® | | --- | --- | | Tweets downloaded | 1437 | | Retweets | 112 | | Short tweets | 65 | | Tweets kept | 1260 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36zw53cv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ayurastro's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nhbmyyli) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nhbmyyli/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ayurastro') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Sivakumar/distilbert-base-uncased-finetuned-squad
Sivakumar
2022-03-13T21:52:35Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-13T17:08:45Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4101 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2109 | 1.0 | 8235 | 1.2303 | | 0.9385 | 2.0 | 16470 | 1.2412 | | 0.7448 | 3.0 | 24705 | 1.4101 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
nguyenvulebinh/vi-mrc-large
nguyenvulebinh
2022-03-13T20:53:44Z
394
5
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "vi", "vn", "en", "dataset:squad", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: - vi - vn - en tags: - question-answering - pytorch datasets: - squad license: cc-by-nc-4.0 pipeline_tag: question-answering metrics: - squad widget: - text: "Bình là chuyên gia về gì ?" context: "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020" - text: "Bình được công nhận với danh hiệu gì ?" context: "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020" --- ## Model Description - Language model: [XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html) - Fine-tune: [MRCQuestionAnswering](https://github.com/nguyenvulebinh/extractive-qa-mrc) - Language: Vietnamese, English - Downstream-task: Extractive QA - Dataset (combining English and Vietnamese): - [Squad 2.0](https://rajpurkar.github.io/SQuAD-explorer/) - [mailong25](https://github.com/mailong25/bert-vietnamese-question-answering/tree/master/dataset) - [VLSP MRC 2021](https://vlsp.org.vn/vlsp2021/eval/mrc) - [MultiLingual Question Answering](https://github.com/facebookresearch/MLQA) This model is intended to be used for QA in Vietnamese, so the validation set is Vietnamese only (but English works fine). The evaluation result below uses the VLSP MRC 2021 test set. This experiment achieves TOP 1 on the leaderboard. | Model | EM | F1 | | ------------- | ------------- | ------------- | | [large](https://huggingface.co/nguyenvulebinh/vi-mrc-large) public_test_set | 85.847 | 83.826 | | [large](https://huggingface.co/nguyenvulebinh/vi-mrc-large) private_test_set | 82.072 | 78.071 | Public leaderboard | Private leaderboard :-------------------------:|:-------------------------: ![](https://i.ibb.co/tJX6V6T/public-leaderboard.jpg) | ![](https://i.ibb.co/nmsX2pG/private-leaderboard.jpg) [MRCQuestionAnswering](https://github.com/nguyenvulebinh/extractive-qa-mrc) uses [XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html) as the pre-trained language model. By default, XLM-RoBERTa splits words into sub-words. In my implementation, the sub-word representations (after being encoded by the BERT layer) are re-combined into word representations using a sum strategy. ## Using pre-trained model [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Yqgdfaca7L94OyQVnq5iQq8wRTFvVZjv?usp=sharing) - Hugging Face pipeline style (**NOT using the sum features strategy**). ```python from transformers import pipeline # model_checkpoint = "nguyenvulebinh/vi-mrc-large" model_checkpoint = "nguyenvulebinh/vi-mrc-base" nlp = pipeline('question-answering', model=model_checkpoint, tokenizer=model_checkpoint) QA_input = { 'question': "Bình là chuyên gia về gì ?", 'context': "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020" } res = nlp(QA_input) print('pipeline: {}'.format(res)) #{'score': 0.5782045125961304, 'start': 45, 'end': 68, 'answer': 'xử lý ngôn ngữ tự nhiên'} ``` - A more accurate inference process ([**using the sum features strategy**](https://github.com/nguyenvulebinh/extractive-qa-mrc)) ```python from infer import tokenize_function, data_collator, extract_answer from model.mrc_model import MRCQuestionAnswering from transformers import AutoTokenizer model_checkpoint = "nguyenvulebinh/vi-mrc-large" #model_checkpoint = "nguyenvulebinh/vi-mrc-base" tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) model = MRCQuestionAnswering.from_pretrained(model_checkpoint) QA_input = { 'question': "Bình được công nhận với danh hiệu gì ?", 'context': "Bình Nguyễn là một người đam mê với lĩnh vực xử lý ngôn ngữ tự nhiên . Anh nhận chứng chỉ Google Developer Expert năm 2020" } inputs = [tokenize_function(*QA_input)] inputs_ids = data_collator(inputs) outputs = model(**inputs_ids) answer = extract_answer(inputs, outputs, tokenizer) print(answer) # answer: Google Developer Expert. Score start: 0.9926977753639221, Score end: 0.9909810423851013 ``` ## About *Built by Binh Nguyen* [![Follow](https://img.shields.io/twitter/follow/nguyenvulebinh?style=social)](https://twitter.com/intent/follow?screen_name=nguyenvulebinh) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/nguyenvulebinh/extractive-qa-mrc?style=social)](https://github.com/nguyenvulebinh/extractive-qa-mrc)
leonadase/distilbert-base-uncased-finetuned-sem
leonadase
2022-03-13T19:41:34Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:sem_eval2010_task8", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-13T13:14:17Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sem_eval2010_task8 metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-sem results: - task: name: Text Classification type: text-classification dataset: name: sem_eval2010_task8 type: sem_eval2010_task8 args: default metrics: - name: Accuracy type: accuracy value: 0.8314317261685683 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sem This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sem_eval2010_task8 dataset. It achieves the following results on the evaluation set: - Loss: 0.6704 - Accuracy: 0.8314 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9556 | 1.0 | 800 | 0.7859 | 0.7814 | | 0.6136 | 2.0 | 1600 | 0.6069 | 0.8193 | | 0.4314 | 3.0 | 2400 | 0.6179 | 0.8211 | | 0.2315 | 4.0 | 3200 | 0.6617 | 0.8281 | | 0.1655 | 5.0 | 4000 | 0.6704 | 0.8314 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
cammy/bart-large-cnn-10k-lit-evalMA-NOpad
cammy
2022-03-13T18:11:42Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-13T11:12:39Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-10k-lit-evalMA-NOpad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-10k-lit-evalMA-NOpad This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9464 - Rouge1: 28.6721 - Rouge2: 13.8303 - Rougel: 22.458 - Rougelsum: 25.668 - Gen Len: 66.893 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.535 | 1.0 | 10000 | 1.7501 | 28.519 | 13.967 | 22.4854 | 25.4511 | 66.555 | | 0.8754 | 2.0 | 20000 | 1.9464 | 28.6721 | 13.8303 | 22.458 | 25.668 | 66.893 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
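For readers who want to try the fine-tuned checkpoint, here is a brief, illustrative sketch (not from the original card) using the standard summarization pipeline; the length settings and the placeholder text are assumptions:

```python
from transformers import pipeline

# Sketch: summarize a long passage with the fine-tuned BART checkpoint.
summarizer = pipeline("summarization", model="cammy/bart-large-cnn-10k-lit-evalMA-NOpad")

article = "..."  # placeholder: replace with the long text you want summarized
summary = summarizer(article, max_length=130, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```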
Taekyoon/komrc_train
Taekyoon
2022-03-13T15:11:14Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:korquad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-13T12:22:58Z
--- tags: - generated_from_trainer datasets: - korquad model-index: - name: komrc_train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # komrc_train This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the korquad dataset. It achieves the following results on the evaluation set: - Loss: 0.6544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1234 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.8187 | 0.31 | 2000 | 0.7377 | | 0.6947 | 0.63 | 4000 | 0.6934 | | 0.6352 | 0.94 | 6000 | 0.6544 | | 0.3869 | 1.25 | 8000 | 0.7633 | | 0.3812 | 1.56 | 10000 | 0.7047 | | 0.3579 | 1.88 | 12000 | 0.7097 | | 0.2053 | 2.19 | 14000 | 0.8511 | | 0.2173 | 2.5 | 16000 | 0.8457 | | 0.2094 | 2.82 | 18000 | 0.8433 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.10.3
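As an illustration only (the original card has no usage section), the fine-tuned checkpoint should work with the question-answering pipeline; the Korean question/context pair below is a made-up example:

```python
from transformers import pipeline

# Sketch: extractive QA with the KorQuAD fine-tuned kcbert model.
qa = pipeline("question-answering", model="Taekyoon/komrc_train")
result = qa(
    question="대한민국의 수도는 어디인가?",   # "What is the capital of South Korea?"
    context="대한민국의 수도는 서울이다.",     # "The capital of South Korea is Seoul."
)
print(result["answer"], result["score"])
```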
Devendr/wav2vec2-large-xls-r-300m-hindi
Devendr
2022-03-13T14:44:09Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-13T14:01:10Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-hindi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
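No inference example is given in the card, so the following is a hedged sketch with the ASR pipeline; it assumes the repository ships the processor/tokenizer files produced during fine-tuning, and the audio path is a placeholder for a 16 kHz Hindi recording:

```python
from transformers import pipeline

# Sketch: transcribe Hindi speech with the fine-tuned XLS-R checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="Devendr/wav2vec2-large-xls-r-300m-hindi",
)
print(asr("hindi_sample.wav")["text"])  # placeholder file name
```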
cammy/bart-large-cnn-weaksup-10k-NOpad-early
cammy
2022-03-13T08:16:48Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-13T05:40:28Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-weaksup-10k-NOpad-early results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-weaksup-10k-NOpad-early This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7883 - Rouge1: 26.9755 - Rouge2: 12.4975 - Rougel: 21.0743 - Rougelsum: 23.9303 - Gen Len: 69.549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.4657 | 1.0 | 10000 | 1.7295 | 27.973 | 13.2818 | 21.8493 | 25.0101 | 67.831 | | 0.8522 | 2.0 | 20000 | 1.7883 | 26.9755 | 12.4975 | 21.0743 | 23.9303 | 69.549 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323
DrishtiSharma
2022-03-13T07:31:45Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:DrishtiSharma/autonlp-data-Text-Classification-Catalonia-Independence-AutoNLP", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-13T07:28:50Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - DrishtiSharma/autonlp-data-Text-Classification-Catalonia-Independence-AutoNLP co2_eq_emissions: 3.622203603306694 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 633018323 - CO2 Emissions (in grams): 3.622203603306694 ## Validation Metrics - Loss: 0.681106686592102 - Accuracy: 0.709136109384711 - Macro F1: 0.6987186860138147 - Micro F1: 0.709136109384711 - Weighted F1: 0.7059639788836748 - Macro Precision: 0.7174345617951404 - Micro Precision: 0.709136109384711 - Weighted Precision: 0.712710833401347 - Macro Recall: 0.6912117894374218 - Micro Recall: 0.709136109384711 - Weighted Recall: 0.709136109384711 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("DrishtiSharma/autonlp-Text-Classification-Catalonia-Independence-AutoNLP-633018323", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
Yokohide031/rust_cl-tohoku_bert-large-japanese
Yokohide031
2022-03-13T05:53:03Z
5
1
transformers
[ "transformers", "rust", "bert", "fill-mask", "ja", "dataset:wikipedia", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-13T02:12:20Z
--- language: ja license: cc-by-sa-4.0 datasets: - wikipedia widget: - text: Rustで[MASK]を使うことができます。。 --- # What is this model? - Tohoku University's BERT large Japanese, converted so that it can be used from Rust - [cl-tohoku/bert-large-japanese](https://huggingface.co/cl-tohoku/bert-large-japanese) # How to Try ### 1. Clone ``` git clone https://huggingface.co/Yokohide031/rust_cl-tohoku_bert-large-japanese ``` ### 2. Create Project ``` cargo new <projectName> ``` ### 3. Edit main.rs ``` extern crate anyhow; use rust_bert::bert::{BertConfig, BertForMaskedLM}; use rust_bert::Config; use rust_tokenizers::tokenizer::{BertTokenizer, MultiThreadedTokenizer, TruncationStrategy}; use rust_tokenizers::vocab::Vocab; use tch::{nn, no_grad, Device, Tensor}; use std::path::PathBuf; fn get_path(item: String) -> PathBuf { let mut resource_dir = PathBuf::from("path/to/rust_cl-tohoku_bert-large-japanese/"); resource_dir.push(&item); println!("{:?}", resource_dir); return resource_dir; } fn input(display: String) -> String { let mut text = String::new(); println!("{}", display); std::io::stdin().read_line(&mut text).unwrap(); return text.trim().to_string(); } fn main() -> anyhow::Result<()> { // Resources paths let model_path: PathBuf = get_path(String::from("rust_model.ot")); let vocab_path: PathBuf = get_path(String::from("vocab.txt")); let config_path: PathBuf = get_path(String::from("config.json")); // Set-up masked LM model let device = Device::Cpu; let mut vs = nn::VarStore::new(device); let config = BertConfig::from_file(config_path); let bert_model = BertForMaskedLM::new(&vs.root(), &config); vs.load(model_path)?; // Define input let inp = input(String::from("Input: ")); let inp = inp.replace("*", "[MASK]"); let input = [inp]; let tokenizer: BertTokenizer = BertTokenizer::from_file(vocab_path.to_str().unwrap(), false, false).unwrap(); let owakatied = &tokenizer.tokenize_list(&input); let tokenized_input = tokenizer.encode_list(&input, 128, &TruncationStrategy::LongestFirst, 0); let mut mask_index: usize = 0; for (i, m) in owakatied[0].iter().enumerate() { if m == "[MASK]" { mask_index = i+1; break; } } let max_len = tokenized_input .iter() .map(|input| input.token_ids.len()) .max() .unwrap(); let tokenized_input = tokenized_input .iter() .map(|input| input.token_ids.clone()) .map(|mut input| { input.extend(vec![0; max_len - input.len()]); input }) .map(|input| Tensor::of_slice(&(input))) .collect::<Vec<_>>(); let input_tensor = Tensor::stack(tokenized_input.as_slice(), 0).to(device); // Forward pass let model_output = no_grad(|| { bert_model.forward_t( Some(&input_tensor), None, None, None, None, None, None, false, ) }); println!("MASK: {}", mask_index); // Print masked tokens let index_1 = model_output .prediction_scores .get(0) .get(mask_index as i64) .argmax(0, false); let word = tokenizer.vocab().id_to_token(&index_1.int64_value(&[])); println!("{}", word); Ok(()) } ``` Note: in the code above, "*" is used in place of [MASK]. ## Licenses The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
cammy/bart-large-cnn-weaksup-1000-NOpad-early
cammy
2022-03-13T05:51:27Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-13T05:36:31Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-weaksup-1000-NOpad-early results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-weaksup-1000-NOpad-early This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9082 - Rouge1: 26.9663 - Rouge2: 11.3027 - Rougel: 20.7327 - Rougelsum: 23.5965 - Gen Len: 67.19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.4775 | 1.0 | 1000 | 1.6796 | 27.208 | 12.01 | 20.8401 | 24.1333 | 66.06 | | 0.6972 | 2.0 | 2000 | 1.9082 | 26.9663 | 11.3027 | 20.7327 | 23.5965 | 67.19 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
richielo/small-e-czech-finetuned-ner-wikiann
richielo
2022-03-12T20:18:42Z
12,031
2
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-12T17:57:32Z
--- license: cc-by-4.0 tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: small-e-czech-finetuned-ner-wikiann results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: cs metrics: - name: Precision type: precision value: 0.8713322894683097 - name: Recall type: recall value: 0.8970423324922905 - name: F1 type: f1 value: 0.8840004144075699 - name: Accuracy type: accuracy value: 0.9557089381093997 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-e-czech-finetuned-ner-wikiann This model is a fine-tuned version of [Seznam/small-e-czech](https://huggingface.co/Seznam/small-e-czech) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2547 - Precision: 0.8713 - Recall: 0.8970 - F1: 0.8840 - Accuracy: 0.9557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2924 | 1.0 | 2500 | 0.2449 | 0.7686 | 0.8088 | 0.7882 | 0.9320 | | 0.2042 | 2.0 | 5000 | 0.2137 | 0.8050 | 0.8398 | 0.8220 | 0.9400 | | 0.1699 | 3.0 | 7500 | 0.1912 | 0.8236 | 0.8593 | 0.8411 | 0.9466 | | 0.1419 | 4.0 | 10000 | 0.1931 | 0.8349 | 0.8671 | 0.8507 | 0.9488 | | 0.1316 | 5.0 | 12500 | 0.1892 | 0.8470 | 0.8776 | 0.8620 | 0.9519 | | 0.1042 | 6.0 | 15000 | 0.2058 | 0.8433 | 0.8811 | 0.8618 | 0.9508 | | 0.0884 | 7.0 | 17500 | 0.2020 | 0.8602 | 0.8849 | 0.8724 | 0.9531 | | 0.0902 | 8.0 | 20000 | 0.2118 | 0.8551 | 0.8837 | 0.8692 | 0.9528 | | 0.0669 | 9.0 | 22500 | 0.2171 | 0.8634 | 0.8906 | 0.8768 | 0.9550 | | 0.0529 | 10.0 | 25000 | 0.2228 | 0.8638 | 0.8912 | 0.8773 | 0.9545 | | 0.0613 | 11.0 | 27500 | 0.2293 | 0.8626 | 0.8898 | 0.8760 | 0.9544 | | 0.0549 | 12.0 | 30000 | 0.2276 | 0.8694 | 0.8958 | 0.8824 | 0.9554 | | 0.0516 | 13.0 | 32500 | 0.2384 | 0.8717 | 0.8940 | 0.8827 | 0.9552 | | 0.0412 | 14.0 | 35000 | 0.2443 | 0.8701 | 0.8931 | 0.8815 | 0.9554 | | 0.0345 | 15.0 | 37500 | 0.2464 | 0.8723 | 0.8958 | 0.8839 | 0.9557 | | 0.0412 | 16.0 | 40000 | 0.2477 | 0.8705 | 0.8948 | 0.8825 | 0.9552 | | 0.0363 | 17.0 | 42500 | 0.2525 | 0.8742 | 0.8973 | 0.8856 | 0.9559 | | 0.0341 | 18.0 | 45000 | 0.2529 | 0.8727 | 0.8962 | 0.8843 | 0.9561 | | 0.0194 | 19.0 | 47500 | 0.2533 | 0.8699 | 0.8966 | 0.8830 | 0.9557 | | 0.0247 | 20.0 | 50000 | 0.2547 | 0.8713 | 0.8970 | 0.8840 | 0.9557 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
test1345/autonlp-savesome-631818261
test1345
2022-03-12T19:00:24Z
6
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "unk", "dataset:test1345/autonlp-data-savesome", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-12T18:55:14Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - test1345/autonlp-data-savesome co2_eq_emissions: 5.714250590300453 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 631818261 - CO2 Emissions (in grams): 5.714250590300453 ## Validation Metrics - Loss: 0.44651690125465393 - Accuracy: 0.8792873051224944 - Macro F1: 0.839261602941426 - Micro F1: 0.8792873051224943 - Weighted F1: 0.8790427387522044 - Macro Precision: 0.8407634723656228 - Micro Precision: 0.8792873051224944 - Weighted Precision: 0.8801219917819031 - Macro Recall: 0.8400328140795883 - Micro Recall: 0.8792873051224944 - Weighted Recall: 0.8792873051224944 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/test1345/autonlp-savesome-631818261 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("test1345/autonlp-savesome-631818261", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("test1345/autonlp-savesome-631818261", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
zdepablo/xlm-roberta-base-finetuned-panx-de-fr
zdepablo
2022-03-12T18:54:00Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-12T18:44:26Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1664 - F1: 0.8556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2846 | 1.0 | 715 | 0.1837 | 0.8247 | | 0.1446 | 2.0 | 1430 | 0.1617 | 0.8409 | | 0.0923 | 3.0 | 2145 | 0.1664 | 0.8556 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
zdepablo/xlm-roberta-base-finetuned-panx-de
zdepablo
2022-03-12T18:25:42Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-12T18:16:44Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8594910162670748 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1348 - F1: 0.8595 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2556 | 1.0 | 525 | 0.1629 | 0.8218 | | 0.1309 | 2.0 | 1050 | 0.1378 | 0.8522 | | 0.0812 | 3.0 | 1575 | 0.1348 | 0.8595 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
Babygirl/Daddy
Babygirl
2022-03-12T17:48:58Z
0
1
null
[ "license:artistic-2.0", "region:us" ]
null
2022-03-12T17:48:58Z
--- license: artistic-2.0 ---
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT
StivenLancheros
2022-03-12T11:50:46Z
275
1
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset. It achieves the following results on the evaluation set: - Loss: 0.1720 - Precision: 0.8253 - Recall: 0.8147 - F1: 0.8200 - Accuracy: 0.9660 ## Model description This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases)(Colorado Richly Annotated Full Text) Corpus in English. Entity tags have been normalized and replaced from the original three letter code to a full name e.g. B-Protein, I-Chemical. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1133 | 1.0 | 1360 | 0.1629 | 0.7985 | 0.7782 | 0.7882 | 0.9610 | | 0.049 | 2.0 | 2720 | 0.1530 | 0.8165 | 0.8084 | 0.8124 | 0.9651 | | 0.0306 | 3.0 | 4080 | 0.1603 | 0.8198 | 0.8075 | 0.8136 | 0.9650 | | 0.0158 | 4.0 | 5440 | 0.1720 | 0.8253 | 0.8147 | 0.8200 | 0.9660 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
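As a quick illustration (not part of the original card), the checkpoint can be queried through the token-classification pipeline; the example sentence below is invented:

```python
from transformers import pipeline

# Sketch: CRAFT-style biomedical NER (Sequence, Cell, Protein, Gene, Taxon, Chemical).
ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("The BRCA1 gene encodes a protein involved in DNA repair in Homo sapiens."))
```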
sanchit-gandhi/wav2vec2-2-rnd-2-layer-bart
sanchit-gandhi
2022-03-12T03:02:56Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-10T20:56:10Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 4.6263 - Wer: 0.8568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.9849 | 1.68 | 1500 | 5.9623 | 1.1028 | | 5.1696 | 3.36 | 3000 | 5.5504 | 1.6345 | | 4.1412 | 5.04 | 4500 | 5.3853 | 1.3565 | | 2.7226 | 6.73 | 6000 | 5.3072 | 0.9908 | | 3.2607 | 8.41 | 7500 | 5.4121 | 1.2854 | | 2.4017 | 10.09 | 9000 | 5.1094 | 1.0303 | | 1.7361 | 11.77 | 10500 | 4.8928 | 0.9506 | | 2.0638 | 13.45 | 12000 | 4.8352 | 0.9127 | | 1.2832 | 15.13 | 13500 | 4.7271 | 0.9103 | | 1.0439 | 16.82 | 15000 | 4.5980 | 0.8720 | | 0.4112 | 18.5 | 16500 | 4.6263 | 0.8568 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
NiameyBorden/nia
NiameyBorden
2022-03-11T18:11:13Z
0
0
null
[ "license:bsd-3-clause", "region:us" ]
null
2022-03-11T18:11:13Z
--- license: bsd-3-clause ---
GroNLP/wav2vec2-dutch-large
GroNLP
2022-03-11T16:04:07Z
14
2
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "speech", "nl", "endpoints_compatible", "region:us" ]
null
2022-03-11T15:41:51Z
--- language: nl tags: - speech --- # Wav2Vec2-Dutch-Large A Dutch Wav2Vec2 model. This model is created by further pre-training the original English [`facebook/wav2vec2-large`](https://huggingface.co/facebook/wav2vec2-large) model on Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/). This model is one of two Dutch Wav2Vec2 models: - [`GroNLP/wav2vec2-dutch-base`](https://huggingface.co/GroNLP/wav2vec2-dutch-base) - [`GroNLP/wav2vec2-dutch-large`](https://huggingface.co/GroNLP/wav2vec2-dutch-large) (this model)
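Since this is a pre-trained checkpoint without a CTC head, a typical use is extracting speech representations (or fine-tuning for ASR). A minimal feature-extraction sketch, assuming 16 kHz mono input and that the repo ships a preprocessor config:

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

feature_extractor = AutoFeatureExtractor.from_pretrained("GroNLP/wav2vec2-dutch-large")
model = Wav2Vec2Model.from_pretrained("GroNLP/wav2vec2-dutch-large")

# Placeholder audio: one second of random samples standing in for real 16 kHz mono speech.
speech = torch.randn(16000).numpy()
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```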
Azu/trocr-handwritten-math
Azu
2022-03-11T13:00:38Z
5
5
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "endpoints_compatible", "region:us" ]
image-text-to-text
2022-03-11T12:51:19Z
This model generates the LaTeX sequence of a math expression from an image of the handwritten expression. On the CROHME 2014 test dataset it achieves CER = 0.507772718700326.
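A minimal image-to-LaTeX inference sketch, assuming the repository ships TrOCR-style processor files (image processor + tokenizer); if it does not, the processor would have to be loaded from a compatible TrOCR checkpoint instead.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Assumption: processor files are available in this repo; otherwise use a compatible TrOCR processor.
processor = TrOCRProcessor.from_pretrained("Azu/trocr-handwritten-math")
model = VisionEncoderDecoderModel.from_pretrained("Azu/trocr-handwritten-math")

image = Image.open("handwritten_equation.png").convert("RGB")  # hypothetical input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values, max_length=256)
latex = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(latex)
```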
leftthomas/resnet50
leftthomas
2022-03-11T12:53:14Z
83
0
transformers
[ "transformers", "pytorch", "resnet", "image-classification", "custom_code", "dataset:imagenet", "arxiv:1512.03385", "license:afl-3.0", "autotrain_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - resnet license: afl-3.0 datasets: - imagenet widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # ResNet-50 Pretrained model on [ImageNet](http://www.image-net.org/). The ResNet architecture was introduced in [this paper](https://arxiv.org/abs/1512.03385). ## Intended uses You can use the raw model to classify images into the 1,000 ImageNet classes, or replace its head and fine-tune it on a downstream task (another classification task with different labels, image segmentation, or object detection, to name a few). ## Evaluation results This model reaches a top-1 accuracy of 76.13% and a top-5 accuracy of 92.86% on the ImageNet evaluation set.
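A minimal classification sketch. The repo is tagged `custom_code`, so `trust_remote_code=True` is assumed to be required, and the availability of a bundled image processor and ImageNet label mapping is also an assumption; adjust to whatever loading path the repository actually documents.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumptions: the repo's custom modeling code registers with the Auto classes and ships an image processor.
processor = AutoImageProcessor.from_pretrained("leftthomas/resnet50", trust_remote_code=True)
model = AutoModelForImageClassification.from_pretrained("leftthomas/resnet50", trust_remote_code=True)

image = Image.open("tiger.jpg")  # hypothetical local copy of the widget example
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print("Predicted class index:", predicted)
print("Label (if the config provides one):", model.config.id2label.get(predicted))
```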
pratt3000/wav2vec2-base-finetuned-ks
pratt3000
2022-03-11T12:23:41Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-10T08:59:17Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: name: wav2vec2-base-finetuned-ks --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0029 - Accuracy: 0.9997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0037 | 1.0 | 400 | 0.0054 | 0.9991 | | 0.0007 | 2.0 | 800 | 0.0029 | 0.9997 | | 0.0004 | 3.0 | 1200 | 0.0028 | 0.9997 | | 0.0003 | 4.0 | 1600 | 0.0029 | 0.9997 | | 0.0003 | 5.0 | 2000 | 0.0028 | 0.9997 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.10.3
ratishsp/SeqPlan-MLB
ratishsp
2022-03-11T12:08:06Z
0
0
null
[ "arxiv:2202.13756", "region:us" ]
null
2022-03-11T11:54:01Z
This repo contains the model for [Data-to-text Generation with Variational Sequential Planning](https://arxiv.org/abs/2202.13756) (Ratish Puduppully and Yao Fu and Mirella Lapata; In Transactions of the Association for Computational Linguistics (TACL)). This model is trained on the [MLB dataset](https://huggingface.co/datasets/GEM/mlb_data_to_text). The code is available in the GitHub [repo](https://github.com/ratishsp/data2text-seq-plan-py). ## Citation ``` @article{puduppully-2021-seq-plan, author = {Ratish Puduppully and Yao Fu and Mirella Lapata}, title = {Data-to-text Generation with Variational Sequential Planning}, journal = {Transactions of the Association for Computational Linguistics (to appear)}, url = {https://arxiv.org/abs/2202.13756}, year = {2022} } ``` ## License The model is available under the MIT License.
tiot07/wav2vec2-base-timit-demo-colab
tiot07
2022-03-11T06:09:20Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-11T04:56:00Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4612 - Wer: 0.2963 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9218 | 4.0 | 500 | 0.6017 | 0.5820 | | 0.5407 | 8.0 | 1000 | 0.4846 | 0.4388 | | 0.2899 | 12.0 | 1500 | 0.4442 | 0.3654 | | 0.1848 | 16.0 | 2000 | 0.4693 | 0.3396 | | 0.1282 | 20.0 | 2500 | 0.4690 | 0.3215 | | 0.0936 | 24.0 | 3000 | 0.4597 | 0.3125 | | 0.0714 | 28.0 | 3500 | 0.4612 | 0.2963 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.10.3
lijingxin/xlm-roberta-base-finetuned-panx-all
lijingxin
2022-03-11T02:47:18Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-11T02:37:03Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1748 - F1: 0.8555 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3036 | 1.0 | 835 | 0.1888 | 0.8068 | | 0.1585 | 2.0 | 1670 | 0.1763 | 0.8415 | | 0.1027 | 3.0 | 2505 | 0.1748 | 0.8555 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
lijingxin/xlm-roberta-base-finetuned-panx-en
lijingxin
2022-03-11T02:25:33Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-11T02:22:57Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.7043040804918949 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3814 - F1: 0.7043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1472 | 1.0 | 50 | 0.5820 | 0.4600 | | 0.5186 | 2.0 | 100 | 0.4105 | 0.6645 | | 0.3599 | 3.0 | 150 | 0.3814 | 0.7043 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
Sarahliu186/distilbert-base-uncased-finetuned-cola
Sarahliu186
2022-03-10T20:47:06Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-10T20:02:25Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.548847644400088 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7415 - Matthews Correlation: 0.5488 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5273 | 1.0 | 535 | 0.5063 | 0.4092 | | 0.3491 | 2.0 | 1070 | 0.4956 | 0.5259 | | 0.2352 | 3.0 | 1605 | 0.6045 | 0.5301 | | 0.1737 | 4.0 | 2140 | 0.7415 | 0.5488 | | 0.1264 | 5.0 | 2675 | 0.8459 | 0.5466 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak
Ameer05
2022-03-10T16:53:08Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "summarization", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-10T16:32:19Z
--- tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak This model is a fine-tuned version of [Ameer05/model-token-repo](https://huggingface.co/Ameer05/model-token-repo) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4511 - Rouge1: 59.76 - Rouge2: 52.1999 - Rougel: 57.3631 - Rougelsum: 59.3075 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 9 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | No log | 0.91 | 5 | 2.0185 | 52.2186 | 45.4675 | 49.3152 | 51.9415 | | No log | 1.91 | 10 | 1.6571 | 60.7728 | 52.8611 | 57.3487 | 60.1676 | | No log | 2.91 | 15 | 1.5323 | 60.5674 | 52.2246 | 57.9846 | 60.073 | | No log | 3.91 | 20 | 1.4556 | 61.2167 | 53.5087 | 58.9609 | 60.893 | | 1.566 | 4.91 | 25 | 1.4632 | 62.918 | 55.4544 | 60.7116 | 62.6614 | | 1.566 | 5.91 | 30 | 1.4360 | 60.4173 | 52.5859 | 57.8131 | 59.8864 | | 1.566 | 6.91 | 35 | 1.4361 | 61.4273 | 53.9663 | 59.4445 | 60.9672 | | 1.566 | 7.91 | 40 | 1.4477 | 60.3401 | 52.7276 | 57.7504 | 59.8209 | | 0.6928 | 8.91 | 45 | 1.4511 | 59.76 | 52.1999 | 57.3631 | 59.3075 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.10.3
responsibility-framing/predict-perception-bert-focus-concept
responsibility-framing
2022-03-10T16:23:46Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-10T16:21:24Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-bert-focus-concept results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-bert-focus-concept This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8129 - Rmse: 1.0197 - Rmse Focus::a Su un concetto astratto o un'emozione: 1.0197 - Mae: 0.7494 - Mae Focus::a Su un concetto astratto o un'emozione: 0.7494 - R2: 0.1970 - R2 Focus::a Su un concetto astratto o un'emozione: 0.1970 - Cos: 0.4783 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.4667 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un concetto astratto o un'emozione | Mae | Mae Focus::a Su un concetto astratto o un'emozione | R2 | R2 Focus::a Su un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------------------------------:|:------:|:--------------------------------------------------:|:-------:|:-------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:| | 1.047 | 1.0 | 15 | 1.0199 | 1.1422 | 1.1422 | 0.9321 | 0.9321 | -0.0075 | -0.0075 | 0.1304 | 0.0 | 0.5 | 0.3199 | nan | | 0.9914 | 2.0 | 30 | 0.9724 | 1.1153 | 1.1153 | 0.9407 | 0.9407 | 0.0393 | 0.0393 | 0.2174 | 0.0 | 0.5 | 0.3954 | nan | | 0.9049 | 3.0 | 45 | 0.9406 | 1.0969 | 1.0969 | 0.9170 | 0.9170 | 0.0708 | 0.0708 | 0.2174 | 0.0 | 0.5 | 0.3632 | nan | | 0.8826 | 4.0 | 60 | 0.8553 | 1.0460 | 1.0460 | 0.8570 | 0.8570 | 0.1551 | 0.1551 | 0.2174 | 0.0 | 0.5 | 0.3230 | nan | | 0.7837 | 5.0 | 75 | 0.8324 | 1.0319 | 1.0319 | 0.8683 | 0.8683 | 0.1776 | 0.1776 | 0.2174 | 0.0 | 0.5 | 0.3419 | nan | | 0.7013 | 6.0 | 90 | 0.7737 | 0.9949 | 0.9949 | 0.8150 | 0.8150 | 0.2356 | 0.2356 | 0.5652 | 0.0 | 0.5 | 0.5023 | nan | | 0.6429 | 7.0 | 105 | 0.7832 | 1.0010 | 1.0010 | 0.8005 | 0.8005 | 0.2262 | 0.2262 | 0.3913 | 0.0 | 0.5 | 0.4446 | nan | | 0.5526 | 8.0 | 120 | 0.7734 | 0.9946 | 0.9946 | 0.7704 | 0.7704 | 0.2360 | 0.2360 | 0.3043 | 0.0 | 0.5 | 0.2923 | nan | | 0.5194 | 9.0 | 135 | 0.6624 | 0.9205 | 0.9205 | 0.7013 | 0.7013 | 0.3456 | 0.3456 | 0.3913 | 0.0 | 0.5 | 0.3523 | nan | | 0.4278 | 10.0 | 150 | 0.8255 | 1.0276 | 1.0276 | 0.7351 | 0.7351 | 0.1845 | 0.1845 | 0.3043 | 0.0 | 0.5 | 0.4349 | nan | | 0.3522 | 11.0 | 165 | 0.9340 | 1.0931 | 1.0931 | 0.8069 | 0.8069 | 0.0773 | 0.0773 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan | | 0.314 | 12.0 | 180 | 0.7495 | 0.9792 | 0.9792 | 0.7254 | 0.7254 | 0.2596 | 0.2596 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan | | 0.2665 | 13.0 | 195 | 0.8574 | 1.0473 | 1.0473 | 0.7678 | 0.7678 | 0.1530 | 0.1530 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan | | 0.2348 | 14.0 | 210 | 0.7913 | 1.0061 | 1.0061 | 0.7218 | 0.7218 | 0.2183 | 0.2183 
| 0.3913 | 0.0 | 0.5 | 0.4059 | nan | | 0.1859 | 15.0 | 225 | 0.8012 | 1.0124 | 1.0124 | 0.7162 | 0.7162 | 0.2085 | 0.2085 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan | | 0.1373 | 16.0 | 240 | 0.8405 | 1.0369 | 1.0369 | 0.7318 | 0.7318 | 0.1697 | 0.1697 | 0.3043 | 0.0 | 0.5 | 0.3734 | nan | | 0.1245 | 17.0 | 255 | 0.8398 | 1.0365 | 1.0365 | 0.7455 | 0.7455 | 0.1703 | 0.1703 | 0.4783 | 0.0 | 0.5 | 0.4667 | nan | | 0.1148 | 18.0 | 270 | 0.7948 | 1.0083 | 1.0083 | 0.7140 | 0.7140 | 0.2148 | 0.2148 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan | | 0.1187 | 19.0 | 285 | 0.8301 | 1.0305 | 1.0305 | 0.7381 | 0.7381 | 0.1799 | 0.1799 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan | | 0.1236 | 20.0 | 300 | 0.8867 | 1.0650 | 1.0650 | 0.7879 | 0.7879 | 0.1240 | 0.1240 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan | | 0.1101 | 21.0 | 315 | 0.8405 | 1.0369 | 1.0369 | 0.7632 | 0.7632 | 0.1696 | 0.1696 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan | | 0.0902 | 22.0 | 330 | 0.7850 | 1.0021 | 1.0021 | 0.7173 | 0.7173 | 0.2245 | 0.2245 | 0.3043 | 0.0 | 0.5 | 0.3734 | nan | | 0.093 | 23.0 | 345 | 0.7386 | 0.9720 | 0.9720 | 0.6960 | 0.6960 | 0.2704 | 0.2704 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan | | 0.0846 | 24.0 | 360 | 0.7748 | 0.9956 | 0.9956 | 0.7150 | 0.7150 | 0.2345 | 0.2345 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan | | 0.0826 | 25.0 | 375 | 0.7951 | 1.0085 | 1.0085 | 0.7230 | 0.7230 | 0.2145 | 0.2145 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan | | 0.0749 | 26.0 | 390 | 0.8470 | 1.0409 | 1.0409 | 0.7621 | 0.7621 | 0.1633 | 0.1633 | 0.4783 | 0.0 | 0.5 | 0.4667 | nan | | 0.069 | 27.0 | 405 | 0.7968 | 1.0096 | 1.0096 | 0.7275 | 0.7275 | 0.2129 | 0.2129 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan | | 0.0775 | 28.0 | 420 | 0.8298 | 1.0303 | 1.0303 | 0.7589 | 0.7589 | 0.1802 | 0.1802 | 0.4783 | 0.0 | 0.5 | 0.4667 | nan | | 0.0783 | 29.0 | 435 | 0.8113 | 1.0188 | 1.0188 | 0.7469 | 0.7469 | 0.1985 | 0.1985 | 0.4783 | 0.0 | 0.5 | 0.4667 | nan | | 0.0773 | 30.0 | 450 | 0.8129 | 1.0197 | 1.0197 | 0.7494 | 0.7494 | 0.1970 | 0.1970 | 0.4783 | 0.0 | 0.5 | 0.4667 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
responsibility-framing/predict-perception-bert-cause-concept
responsibility-framing
2022-03-10T16:08:27Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-10T16:04:43Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-bert-cause-concept results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-bert-cause-concept This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4044 - Rmse: 0.6076 - Rmse Cause::a Causata da un concetto astratto (es. gelosia): 0.6076 - Mae: 0.4548 - Mae Cause::a Causata da un concetto astratto (es. gelosia): 0.4548 - R2: 0.5463 - R2 Cause::a Causata da un concetto astratto (es. gelosia): 0.5463 - Cos: 0.2174 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.3931 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un concetto astratto (es. gelosia) | Mae | Mae Cause::a Causata da un concetto astratto (es. gelosia) | R2 | R2 Cause::a Causata da un concetto astratto (es. gelosia) | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------------:|:------:|:----------------------------------------------------------:|:-------:|:---------------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:| | 1.08 | 1.0 | 15 | 0.9520 | 0.9323 | 0.9323 | 0.6560 | 0.6560 | -0.0680 | -0.0680 | 0.0435 | 0.0 | 0.5 | 0.3188 | nan | | 0.9974 | 2.0 | 30 | 0.8621 | 0.8872 | 0.8872 | 0.5962 | 0.5962 | 0.0328 | 0.0328 | 0.1304 | 0.0 | 0.5 | 0.4066 | nan | | 0.9337 | 3.0 | 45 | 0.9223 | 0.9176 | 0.9176 | 0.6608 | 0.6608 | -0.0347 | -0.0347 | 0.2174 | 0.0 | 0.5 | 0.3632 | nan | | 0.966 | 4.0 | 60 | 0.8273 | 0.8691 | 0.8691 | 0.5874 | 0.5874 | 0.0719 | 0.0719 | 0.2174 | 0.0 | 0.5 | 0.3754 | nan | | 0.8683 | 5.0 | 75 | 0.8741 | 0.8933 | 0.8933 | 0.6136 | 0.6136 | 0.0193 | 0.0193 | 0.2174 | 0.0 | 0.5 | 0.3529 | nan | | 0.8522 | 6.0 | 90 | 0.7781 | 0.8428 | 0.8428 | 0.5732 | 0.5732 | 0.1271 | 0.1271 | 0.2174 | 0.0 | 0.5 | 0.4152 | nan | | 0.7968 | 7.0 | 105 | 0.7257 | 0.8139 | 0.8139 | 0.5519 | 0.5519 | 0.1859 | 0.1859 | 0.2174 | 0.0 | 0.5 | 0.4152 | nan | | 0.7166 | 8.0 | 120 | 0.7122 | 0.8064 | 0.8064 | 0.5792 | 0.5792 | 0.2010 | 0.2010 | 0.1304 | 0.0 | 0.5 | 0.3955 | nan | | 0.6246 | 9.0 | 135 | 0.6771 | 0.7862 | 0.7862 | 0.5701 | 0.5701 | 0.2403 | 0.2403 | 0.0435 | 0.0 | 0.5 | 0.3955 | nan | | 0.5205 | 10.0 | 150 | 0.6704 | 0.7823 | 0.7823 | 0.5735 | 0.5735 | 0.2479 | 0.2479 | 0.3913 | 0.0 | 0.5 | 0.4847 | nan | | 0.4182 | 11.0 | 165 | 0.6852 | 0.7909 | 0.7909 | 0.5987 | 0.5987 | 0.2313 | 0.2313 | 0.3913 | 0.0 | 0.5 | 0.4847 | nan | | 0.3984 | 12.0 | 180 | 0.6106 | 0.7466 | 0.7466 | 0.5696 | 0.5696 | 0.3150 | 0.3150 | 0.0435 | 0.0 | 0.5 | 0.2935 | nan | | 0.3138 | 13.0 | 195 | 0.5867 | 0.7318 | 0.7318 | 0.5209 | 0.5209 | 0.3418 | 0.3418 | 0.2174 | 0.0 | 0.5 | 0.3119 | nan | | 0.2323 | 
14.0 | 210 | 0.5120 | 0.6837 | 0.6837 | 0.5007 | 0.5007 | 0.4256 | 0.4256 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan | | 0.2149 | 15.0 | 225 | 0.4789 | 0.6612 | 0.6612 | 0.4883 | 0.4883 | 0.4627 | 0.4627 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan | | 0.1753 | 16.0 | 240 | 0.4526 | 0.6428 | 0.6428 | 0.4775 | 0.4775 | 0.4922 | 0.4922 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan | | 0.1478 | 17.0 | 255 | 0.4383 | 0.6325 | 0.6325 | 0.4616 | 0.4616 | 0.5083 | 0.5083 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan | | 0.1289 | 18.0 | 270 | 0.4141 | 0.6148 | 0.6148 | 0.4478 | 0.4478 | 0.5355 | 0.5355 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan | | 0.1035 | 19.0 | 285 | 0.3952 | 0.6007 | 0.6007 | 0.4407 | 0.4407 | 0.5566 | 0.5566 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan | | 0.1087 | 20.0 | 300 | 0.4217 | 0.6205 | 0.6205 | 0.4505 | 0.4505 | 0.5269 | 0.5269 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan | | 0.1005 | 21.0 | 315 | 0.4065 | 0.6091 | 0.6091 | 0.4508 | 0.4508 | 0.5440 | 0.5440 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan | | 0.0868 | 22.0 | 330 | 0.3937 | 0.5995 | 0.5995 | 0.4470 | 0.4470 | 0.5584 | 0.5584 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan | | 0.0808 | 23.0 | 345 | 0.4132 | 0.6142 | 0.6142 | 0.4617 | 0.4617 | 0.5364 | 0.5364 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan | | 0.0737 | 24.0 | 360 | 0.4214 | 0.6203 | 0.6203 | 0.4659 | 0.4659 | 0.5272 | 0.5272 | 0.3043 | 0.0 | 0.5 | 0.4066 | nan | | 0.0711 | 25.0 | 375 | 0.3863 | 0.5939 | 0.5939 | 0.4470 | 0.4470 | 0.5666 | 0.5666 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan | | 0.066 | 26.0 | 390 | 0.4353 | 0.6304 | 0.6304 | 0.4760 | 0.4760 | 0.5117 | 0.5117 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan | | 0.0681 | 27.0 | 405 | 0.4078 | 0.6101 | 0.6101 | 0.4612 | 0.4612 | 0.5426 | 0.5426 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan | | 0.0543 | 28.0 | 420 | 0.4118 | 0.6132 | 0.6132 | 0.4616 | 0.4616 | 0.5380 | 0.5380 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan | | 0.069 | 29.0 | 435 | 0.4041 | 0.6074 | 0.6074 | 0.4551 | 0.4551 | 0.5466 | 0.5466 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan | | 0.0604 | 30.0 | 450 | 0.4044 | 0.6076 | 0.6076 | 0.4548 | 0.4548 | 0.5463 | 0.5463 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
responsibility-framing/predict-perception-bert-cause-object
responsibility-framing
2022-03-10T16:04:30Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-10T16:01:55Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-bert-cause-object results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-bert-cause-object This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4120 - Rmse: 1.0345 - Rmse Cause::a Causata da un oggetto (es. una pistola): 1.0345 - Mae: 0.6181 - Mae Cause::a Causata da un oggetto (es. una pistola): 0.6181 - R2: 0.3837 - R2 Cause::a Causata da un oggetto (es. una pistola): 0.3837 - Cos: 0.9130 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.8986 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un oggetto (es. una pistola) | Mae | Mae Cause::a Causata da un oggetto (es. una pistola) | R2 | R2 Cause::a Causata da un oggetto (es. una pistola) | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------:|:------:|:----------------------------------------------------:|:-------:|:---------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:| | 1.0824 | 1.0 | 15 | 0.6651 | 1.3143 | 1.3143 | 1.0930 | 1.0930 | 0.0052 | 0.0052 | 0.3043 | 0.0 | 0.5 | 0.4393 | nan | | 0.9574 | 2.0 | 30 | 0.7088 | 1.3568 | 1.3568 | 1.1945 | 1.1945 | -0.0601 | -0.0601 | 0.0435 | 0.0 | 0.5 | 0.3380 | nan | | 0.8151 | 3.0 | 45 | 0.6300 | 1.2791 | 1.2791 | 1.0206 | 1.0206 | 0.0577 | 0.0577 | 0.3043 | 0.0 | 0.5 | 0.3613 | nan | | 0.6401 | 4.0 | 60 | 0.4871 | 1.1247 | 1.1247 | 0.7285 | 0.7285 | 0.2715 | 0.2715 | 0.5652 | 0.0 | 0.5 | 0.6424 | nan | | 0.448 | 5.0 | 75 | 0.5005 | 1.1401 | 1.1401 | 0.7216 | 0.7216 | 0.2514 | 0.2514 | 0.4783 | 0.0 | 0.5 | 0.6077 | nan | | 0.2893 | 6.0 | 90 | 0.4761 | 1.1119 | 1.1119 | 0.7237 | 0.7237 | 0.2879 | 0.2879 | 0.5652 | 0.0 | 0.5 | 0.6348 | nan | | 0.174 | 7.0 | 105 | 0.4771 | 1.1131 | 1.1131 | 0.6836 | 0.6836 | 0.2865 | 0.2865 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan | | 0.1383 | 8.0 | 120 | 0.4313 | 1.0583 | 1.0583 | 0.6462 | 0.6462 | 0.3550 | 0.3550 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan | | 0.1105 | 9.0 | 135 | 0.4660 | 1.1001 | 1.1001 | 0.6737 | 0.6737 | 0.3030 | 0.3030 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan | | 0.0903 | 10.0 | 150 | 0.4866 | 1.1241 | 1.1241 | 0.7192 | 0.7192 | 0.2723 | 0.2723 | 0.7391 | 0.0 | 0.5 | 0.6833 | nan | | 0.0571 | 11.0 | 165 | 0.4361 | 1.0642 | 1.0642 | 0.6130 | 0.6130 | 0.3478 | 0.3478 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan | | 0.0623 | 12.0 | 180 | 0.4578 | 1.0904 | 1.0904 | 0.6844 | 0.6844 | 0.3152 | 0.3152 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan | | 0.0526 | 13.0 | 195 | 0.4605 | 1.0936 | 1.0936 | 0.6697 | 0.6697 | 0.3112 | 0.3112 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan | | 0.0472 | 14.0 | 210 | 0.4440 | 1.0738 | 1.0738 | 0.6589 | 0.6589 | 
0.3360 | 0.3360 | 0.7391 | 0.0 | 0.5 | 0.7327 | nan | | 0.0492 | 15.0 | 225 | 0.4593 | 1.0922 | 1.0922 | 0.6812 | 0.6812 | 0.3130 | 0.3130 | 0.7391 | 0.0 | 0.5 | 0.6833 | nan | | 0.0389 | 16.0 | 240 | 0.4195 | 1.0437 | 1.0437 | 0.6252 | 0.6252 | 0.3726 | 0.3726 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan | | 0.0396 | 17.0 | 255 | 0.4087 | 1.0302 | 1.0302 | 0.6119 | 0.6119 | 0.3888 | 0.3888 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan | | 0.0328 | 18.0 | 270 | 0.4274 | 1.0535 | 1.0535 | 0.6457 | 0.6457 | 0.3608 | 0.3608 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan | | 0.0345 | 19.0 | 285 | 0.4306 | 1.0574 | 1.0574 | 0.6576 | 0.6576 | 0.3560 | 0.3560 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan | | 0.0328 | 20.0 | 300 | 0.4067 | 1.0277 | 1.0277 | 0.6160 | 0.6160 | 0.3918 | 0.3918 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan | | 0.0344 | 21.0 | 315 | 0.4056 | 1.0263 | 1.0263 | 0.5948 | 0.5948 | 0.3934 | 0.3934 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan | | 0.0312 | 22.0 | 330 | 0.4236 | 1.0488 | 1.0488 | 0.6277 | 0.6277 | 0.3665 | 0.3665 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan | | 0.0241 | 23.0 | 345 | 0.4272 | 1.0533 | 1.0533 | 0.6444 | 0.6444 | 0.3610 | 0.3610 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan | | 0.0302 | 24.0 | 360 | 0.4046 | 1.0250 | 1.0250 | 0.6030 | 0.6030 | 0.3949 | 0.3949 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan | | 0.0244 | 25.0 | 375 | 0.4194 | 1.0436 | 1.0436 | 0.6320 | 0.6320 | 0.3728 | 0.3728 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan | | 0.0259 | 26.0 | 390 | 0.4025 | 1.0224 | 1.0224 | 0.6009 | 0.6009 | 0.3980 | 0.3980 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan | | 0.0265 | 27.0 | 405 | 0.4103 | 1.0323 | 1.0323 | 0.6180 | 0.6180 | 0.3863 | 0.3863 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan | | 0.0184 | 28.0 | 420 | 0.4059 | 1.0268 | 1.0268 | 0.6046 | 0.6046 | 0.3929 | 0.3929 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan | | 0.0257 | 29.0 | 435 | 0.4088 | 1.0304 | 1.0304 | 0.6122 | 0.6122 | 0.3885 | 0.3885 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan | | 0.0262 | 30.0 | 450 | 0.4120 | 1.0345 | 1.0345 | 0.6181 | 0.6181 | 0.3837 | 0.3837 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
responsibility-framing/predict-perception-bert-blame-none
responsibility-framing
2022-03-10T15:59:10Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-10T15:54:27Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-bert-blame-none results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-bert-blame-none This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8646 - Rmse: 1.1072 - Rmse Blame::a Nessuno: 1.1072 - Mae: 0.8721 - Mae Blame::a Nessuno: 0.8721 - R2: 0.3083 - R2 Blame::a Nessuno: 0.3083 - Cos: 0.5652 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.5070 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Nessuno | Mae | Mae Blame::a Nessuno | R2 | R2 Blame::a Nessuno | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------:|:------:|:--------------------:|:-------:|:-------------------:|:-------:|:----:|:----:|:---------:|:---:| | 1.007 | 1.0 | 15 | 1.2585 | 1.3358 | 1.3358 | 1.1752 | 1.1752 | -0.0068 | -0.0068 | -0.0435 | 0.0 | 0.5 | 0.2970 | nan | | 0.927 | 2.0 | 30 | 1.1310 | 1.2663 | 1.2663 | 1.0633 | 1.0633 | 0.0952 | 0.0952 | 0.4783 | 0.0 | 0.5 | 0.4012 | nan | | 0.8376 | 3.0 | 45 | 1.0603 | 1.2261 | 1.2261 | 1.0574 | 1.0574 | 0.1518 | 0.1518 | 0.1304 | 0.0 | 0.5 | 0.2970 | nan | | 0.7154 | 4.0 | 60 | 0.8347 | 1.0879 | 1.0879 | 0.8854 | 0.8854 | 0.3323 | 0.3323 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan | | 0.5766 | 5.0 | 75 | 0.7426 | 1.0261 | 1.0261 | 0.8340 | 0.8340 | 0.4059 | 0.4059 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan | | 0.4632 | 6.0 | 90 | 0.6671 | 0.9725 | 0.9725 | 0.7932 | 0.7932 | 0.4663 | 0.4663 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan | | 0.3854 | 7.0 | 105 | 0.6447 | 0.9561 | 0.9561 | 0.7424 | 0.7424 | 0.4842 | 0.4842 | 0.6522 | 0.0 | 0.5 | 0.4307 | nan | | 0.3154 | 8.0 | 120 | 0.7198 | 1.0102 | 1.0102 | 0.8113 | 0.8113 | 0.4241 | 0.4241 | 0.6522 | 0.0 | 0.5 | 0.4307 | nan | | 0.2637 | 9.0 | 135 | 0.7221 | 1.0118 | 1.0118 | 0.8319 | 0.8319 | 0.4223 | 0.4223 | 0.5652 | 0.0 | 0.5 | 0.4150 | nan | | 0.1962 | 10.0 | 150 | 0.6999 | 0.9962 | 0.9962 | 0.7945 | 0.7945 | 0.4401 | 0.4401 | 0.4783 | 0.0 | 0.5 | 0.4056 | nan | | 0.1784 | 11.0 | 165 | 0.7335 | 1.0198 | 1.0198 | 0.7969 | 0.7969 | 0.4132 | 0.4132 | 0.5652 | 0.0 | 0.5 | 0.4150 | nan | | 0.1531 | 12.0 | 180 | 0.8277 | 1.0833 | 1.0833 | 0.8839 | 0.8839 | 0.3378 | 0.3378 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan | | 0.1425 | 13.0 | 195 | 0.8644 | 1.1070 | 1.1070 | 0.8726 | 0.8726 | 0.3085 | 0.3085 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.0921 | 14.0 | 210 | 0.8874 | 1.1217 | 1.1217 | 0.9024 | 0.9024 | 0.2900 | 0.2900 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan | | 0.0913 | 15.0 | 225 | 0.8663 | 1.1083 | 1.1083 | 0.8914 | 0.8914 | 0.3070 | 0.3070 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.08 | 16.0 | 240 | 0.8678 | 1.1093 | 1.1093 | 0.8762 | 0.8762 | 0.3057 | 0.3057 | 0.6522 | 0.0 | 0.5 | 0.5931 
| nan | | 0.0725 | 17.0 | 255 | 0.8497 | 1.0976 | 1.0976 | 0.8868 | 0.8868 | 0.3202 | 0.3202 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan | | 0.0696 | 18.0 | 270 | 0.8533 | 1.1000 | 1.1000 | 0.8796 | 0.8796 | 0.3173 | 0.3173 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.0632 | 19.0 | 285 | 0.8563 | 1.1018 | 1.1018 | 0.8768 | 0.8768 | 0.3150 | 0.3150 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.0511 | 20.0 | 300 | 0.8433 | 1.0935 | 1.0935 | 0.8684 | 0.8684 | 0.3254 | 0.3254 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.0517 | 21.0 | 315 | 0.8449 | 1.0945 | 1.0945 | 0.8758 | 0.8758 | 0.3240 | 0.3240 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan | | 0.0556 | 22.0 | 330 | 0.8305 | 1.0851 | 1.0851 | 0.8469 | 0.8469 | 0.3356 | 0.3356 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.0457 | 23.0 | 345 | 0.8369 | 1.0893 | 1.0893 | 0.8555 | 0.8555 | 0.3305 | 0.3305 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.0496 | 24.0 | 360 | 0.8441 | 1.0940 | 1.0940 | 0.8648 | 0.8648 | 0.3247 | 0.3247 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.0467 | 25.0 | 375 | 0.8470 | 1.0959 | 1.0959 | 0.8633 | 0.8633 | 0.3224 | 0.3224 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.0446 | 26.0 | 390 | 0.8562 | 1.1018 | 1.1018 | 0.8708 | 0.8708 | 0.3151 | 0.3151 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan | | 0.0476 | 27.0 | 405 | 0.8600 | 1.1042 | 1.1042 | 0.8714 | 0.8714 | 0.3120 | 0.3120 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.042 | 28.0 | 420 | 0.8657 | 1.1079 | 1.1079 | 0.8763 | 0.8763 | 0.3074 | 0.3074 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan | | 0.0431 | 29.0 | 435 | 0.8654 | 1.1077 | 1.1077 | 0.8734 | 0.8734 | 0.3077 | 0.3077 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | | 0.0423 | 30.0 | 450 | 0.8646 | 1.1072 | 1.1072 | 0.8721 | 0.8721 | 0.3083 | 0.3083 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
responsibility-framing/predict-perception-bert-blame-assassin
responsibility-framing
2022-03-10T15:44:18Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-10T15:32:55Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-bert-blame-assassin results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-bert-blame-assassin This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5128 - Rmse: 1.0287 - Rmse Blame::a L'assassino: 1.0287 - Mae: 0.8883 - Mae Blame::a L'assassino: 0.8883 - R2: 0.5883 - R2 Blame::a L'assassino: 0.5883 - Cos: 0.6522 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.5795 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a L'assassino | Mae | Mae Blame::a L'assassino | R2 | R2 Blame::a L'assassino | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------------------------:|:------:|:------------------------:|:------:|:-----------------------:|:------:|:----:|:----:|:---------:|:---:| | 1.0184 | 1.0 | 15 | 1.2219 | 1.5879 | 1.5879 | 1.4308 | 1.4308 | 0.0191 | 0.0191 | 0.3913 | 0.0 | 0.5 | 0.3781 | nan | | 0.9214 | 2.0 | 30 | 1.0927 | 1.5017 | 1.5017 | 1.3634 | 1.3634 | 0.1227 | 0.1227 | 0.5652 | 0.0 | 0.5 | 0.4512 | nan | | 0.7809 | 3.0 | 45 | 0.8206 | 1.3013 | 1.3013 | 1.1808 | 1.1808 | 0.3412 | 0.3412 | 0.4783 | 0.0 | 0.5 | 0.3819 | nan | | 0.6593 | 4.0 | 60 | 0.5894 | 1.1029 | 1.1029 | 1.0145 | 1.0145 | 0.5268 | 0.5268 | 0.7391 | 0.0 | 0.5 | 0.6408 | nan | | 0.4672 | 5.0 | 75 | 0.4759 | 0.9910 | 0.9910 | 0.8868 | 0.8868 | 0.6180 | 0.6180 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.3356 | 6.0 | 90 | 0.4220 | 0.9332 | 0.9332 | 0.8083 | 0.8083 | 0.6612 | 0.6612 | 0.6522 | 0.0 | 0.5 | 0.4249 | nan | | 0.2782 | 7.0 | 105 | 0.4477 | 0.9612 | 0.9612 | 0.8046 | 0.8046 | 0.6406 | 0.6406 | 0.6522 | 0.0 | 0.5 | 0.6101 | nan | | 0.2075 | 8.0 | 120 | 0.4389 | 0.9518 | 0.9518 | 0.8050 | 0.8050 | 0.6476 | 0.6476 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.1725 | 9.0 | 135 | 0.4832 | 0.9985 | 0.9985 | 0.8356 | 0.8356 | 0.6121 | 0.6121 | 0.7391 | 0.0 | 0.5 | 0.6616 | nan | | 0.1642 | 10.0 | 150 | 0.4368 | 0.9494 | 0.9494 | 0.8060 | 0.8060 | 0.6493 | 0.6493 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.1172 | 11.0 | 165 | 0.4538 | 0.9677 | 0.9677 | 0.8174 | 0.8174 | 0.6357 | 0.6357 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.104 | 12.0 | 180 | 0.4672 | 0.9819 | 0.9819 | 0.8384 | 0.8384 | 0.6249 | 0.6249 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0822 | 13.0 | 195 | 0.4401 | 0.9530 | 0.9530 | 0.8107 | 0.8107 | 0.6467 | 0.6467 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0755 | 14.0 | 210 | 0.4464 | 0.9598 | 0.9598 | 0.8251 | 0.8251 | 0.6416 | 0.6416 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0801 | 15.0 | 225 | 0.4834 | 0.9988 | 0.9988 | 0.8604 | 0.8604 | 0.6119 | 0.6119 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.053 | 16.0 | 240 | 0.4846 | 1.0001 | 1.0001 | 0.8651 | 0.8651 | 
0.6109 | 0.6109 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0573 | 17.0 | 255 | 0.4970 | 1.0128 | 1.0128 | 0.8743 | 0.8743 | 0.6010 | 0.6010 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0571 | 18.0 | 270 | 0.4803 | 0.9956 | 0.9956 | 0.8503 | 0.8503 | 0.6144 | 0.6144 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0483 | 19.0 | 285 | 0.4936 | 1.0093 | 1.0093 | 0.8740 | 0.8740 | 0.6037 | 0.6037 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0414 | 20.0 | 300 | 0.5138 | 1.0297 | 1.0297 | 0.8943 | 0.8943 | 0.5875 | 0.5875 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0513 | 21.0 | 315 | 0.5240 | 1.0399 | 1.0399 | 0.9050 | 0.9050 | 0.5793 | 0.5793 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0499 | 22.0 | 330 | 0.5275 | 1.0434 | 1.0434 | 0.9048 | 0.9048 | 0.5765 | 0.5765 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0423 | 23.0 | 345 | 0.5350 | 1.0508 | 1.0508 | 0.8872 | 0.8872 | 0.5705 | 0.5705 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0447 | 24.0 | 360 | 0.4963 | 1.0120 | 1.0120 | 0.8754 | 0.8754 | 0.6016 | 0.6016 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0364 | 25.0 | 375 | 0.5009 | 1.0167 | 1.0167 | 0.8809 | 0.8809 | 0.5979 | 0.5979 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0412 | 26.0 | 390 | 0.5060 | 1.0219 | 1.0219 | 0.8781 | 0.8781 | 0.5938 | 0.5938 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0297 | 27.0 | 405 | 0.5027 | 1.0185 | 1.0185 | 0.8838 | 0.8838 | 0.5964 | 0.5964 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0416 | 28.0 | 420 | 0.5071 | 1.0230 | 1.0230 | 0.8867 | 0.8867 | 0.5929 | 0.5929 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan | | 0.0327 | 29.0 | 435 | 0.5124 | 1.0283 | 1.0283 | 0.8883 | 0.8883 | 0.5887 | 0.5887 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | | 0.0383 | 30.0 | 450 | 0.5128 | 1.0287 | 1.0287 | 0.8883 | 0.8883 | 0.5883 | 0.5883 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
rocca/sims4-faces
rocca
2022-03-10T13:41:17Z
0
0
null
[ "onnx", "license:mit", "region:us" ]
null
2022-03-10T12:15:19Z
--- license: mit --- Datasets here: https://huggingface.co/datasets/rocca/sims4-faces
Kevincp560/bigbird-pegasus-large-bigpatent-finetuned-pubMed
Kevincp560
2022-03-10T13:11:37Z
4
2
transformers
[ "transformers", "pytorch", "bigbird_pegasus", "text2text-generation", "generated_from_trainer", "dataset:pub_med_summarization_dataset", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-10T10:58:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - pub_med_summarization_dataset metrics: - rouge model-index: - name: bigbird-pegasus-large-bigpatent-finetuned-pubMed results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: pub_med_summarization_dataset type: pub_med_summarization_dataset args: document metrics: - name: Rouge1 type: rouge value: 45.0851 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bigbird-pegasus-large-bigpatent-finetuned-pubMed This model is a fine-tuned version of [google/bigbird-pegasus-large-bigpatent](https://huggingface.co/google/bigbird-pegasus-large-bigpatent) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.5403 - Rouge1: 45.0851 - Rouge2: 19.5488 - Rougel: 27.391 - Rougelsum: 41.112 - Gen Len: 231.608 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.1198 | 1.0 | 500 | 1.6285 | 43.0579 | 18.1792 | 26.421 | 39.0769 | 214.924 | | 1.6939 | 2.0 | 1000 | 1.5696 | 44.0679 | 18.9331 | 26.84 | 40.0684 | 222.814 | | 1.6195 | 3.0 | 1500 | 1.5506 | 44.7352 | 19.3532 | 27.2418 | 40.7454 | 229.396 | | 1.5798 | 4.0 | 2000 | 1.5403 | 45.0415 | 19.5019 | 27.2969 | 40.951 | 231.044 | | 1.5592 | 5.0 | 2500 | 1.5403 | 45.0851 | 19.5488 | 27.391 | 41.112 | 231.608 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
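A minimal abstractive-summarization sketch for this checkpoint via the `summarization` pipeline; the generation lengths and the placeholder input are illustrative only.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Kevincp560/bigbird-pegasus-large-bigpatent-finetuned-pubMed",
)

# Illustrative placeholder: pass the full body of a PubMed article here.
article = "Replace this string with the full text of a biomedical article."
summary = summarizer(article, max_length=256, min_length=64, truncation=True)
print(summary[0]["summary_text"])
```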
Chijioke/autonlp-mono-625317956
Chijioke
2022-03-10T12:46:27Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:Chijioke/autonlp-data-mono", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-10T12:45:12Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - Chijioke/autonlp-data-mono co2_eq_emissions: 1.1406456838043837 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 625317956 - CO2 Emissions (in grams): 1.1406456838043837 ## Validation Metrics - Loss: 0.513037919998169 - Accuracy: 0.8982035928143712 - Macro F1: 0.7843756230226546 - Micro F1: 0.8982035928143712 - Weighted F1: 0.8891653474608059 - Macro Precision: 0.8210878091622635 - Micro Precision: 0.8982035928143712 - Weighted Precision: 0.8888857327766032 - Macro Recall: 0.7731018645485747 - Micro Recall: 0.8982035928143712 - Weighted Recall: 0.8982035928143712 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Chijioke/autonlp-mono-625317956 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Chijioke/autonlp-mono-625317956", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Chijioke/autonlp-mono-625317956", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
gustavecortal/gpt-j-fr-covid-news
gustavecortal
2022-03-10T10:05:27Z
6
1
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "causal-lm", "fr", "dataset:gustavecortal/fr_covid_news", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-09T12:18:31Z
--- language: fr license: mit tags: - causal-lm - fr datasets: - gustavecortal/fr_covid_news --- ### GPT-J COVID-19 French News with 8-bit weights This is a version of Cedille's GPT-J ([fr-boris](https://huggingface.co/gustavecortal/fr-boris-8bit)) with 6 billion parameters fine-tuned on [COVID-19 French News dataset](https://huggingface.co/datasets/gustavecortal/fr_covid_news) to generate French headlines related to COVID-19. You can generate the model in colab or equivalent desktop gpu (e.g. single 1080Ti) as the model has 8-bit weights. Inspired by [GPT-J 8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit). Here's how to run it: [![colab](https://camo.githubusercontent.com/84f0493939e0c4de4e6dbe113251b4bfb5353e57134ffd9fcab6b8714514d4d1/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667)](https://colab.research.google.com/drive/1lMja-CPc0vm5_-gXNXAWU-9c0nom7vZ9) This model can be easily loaded using the `GPTJForCausalLM` functionality: ```python from transformers import GPTJForCausalLM model = GPTJForCausalLM.from_pretrained("gustavecortal/gpt-j-fr-covid-news") ``` Remember, you have to Monkey-Patch the model before loading it (see Colab above). ## One thousand AI-generated French headlines related to COVID-19 How not to be disoriented in a pandemic era when faced with an immense flow of information? [This page](https://gustavecortal.com/project/covid) features one thousand AI-generated French headlines related to COVID-19. ## fr-boris Boris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) codebase. Boris was trained on around 78B tokens of French text from the [C4](https://huggingface.co/datasets/c4) dataset. ## Links * [Gustave Cortal](https://twitter.com/gustavecortal)
verok/verok_private
verok
2022-03-10T09:31:56Z
0
1
null
[ "region:us" ]
null
2022-03-10T09:28:49Z
My model can recognize price tags and compare them with competitors' prices.
Splend1dchan/byt5small-glue-mnli
Splend1dchan
2022-03-10T08:40:27Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-07T15:57:56Z
ByT5-small fine-tuned on the MNLI dataset (GLUE) for 3 epochs with lr = 1e-4. Validation (matched) accuracy = 0.80.
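A minimal text2text inference sketch. The prompt template used during fine-tuning is not documented here, so the T5-style `mnli premise: ... hypothesis: ...` format below and the expected label strings (`entailment` / `neutral` / `contradiction`) are assumptions.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Splend1dchan/byt5small-glue-mnli")
model = T5ForConditionalGeneration.from_pretrained("Splend1dchan/byt5small-glue-mnli")

# Assumed T5-style MNLI prompt; adjust to the template actually used for fine-tuning.
text = "mnli premise: A soccer game with multiple males playing. hypothesis: Some men are playing a sport."
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```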
cammy/bart-large-cnn-100-lit-evalMA
cammy
2022-03-10T07:49:09Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-10T06:32:37Z
--- license: mit tags: - generated_from_trainer model-index: - name: bart-large-cnn-100-lit-evalMA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-100-lit-evalMA This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 2.1514 - eval_rouge1: 27.8026 - eval_rouge2: 11.2998 - eval_rougeL: 21.4708 - eval_rougeLsum: 24.6333 - eval_gen_len: 62.5 - eval_runtime: 25.6587 - eval_samples_per_second: 0.39 - eval_steps_per_second: 0.39 - epoch: 2.0 - step: 200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
momo/MOTOD_pre_trained
momo
2022-03-10T07:29:29Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-09T02:23:19Z
--- license: apache-2.0 ---
amanm27/bert-base-uncased-sports-scouting
amanm27
2022-03-10T07:12:38Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-10T07:07:45Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-sports-scouting results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-sports-scouting This model is a fine-tuned version of [amanm27/bert-base-uncased-sports](https://huggingface.co/amanm27/bert-base-uncased-sports) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5127 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 378 | 1.7194 | | 2.0165 | 2.0 | 756 | 1.5709 | | 1.6935 | 3.0 | 1134 | 1.5282 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0 - Datasets 1.18.3 - Tokenizers 0.11.0
amanm27/bert-base-uncased-wiki-scouting
amanm27
2022-03-10T07:05:36Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-10T07:00:47Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-wiki-scouting results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-wiki-scouting This model is a fine-tuned version of [amanm27/bert-base-uncased-wiki](https://huggingface.co/amanm27/bert-base-uncased-wiki) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5048 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 378 | 1.7017 | | 1.9945 | 2.0 | 756 | 1.5597 | | 1.6769 | 3.0 | 1134 | 1.5160 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0 - Datasets 1.18.3 - Tokenizers 0.11.0
amanm27/bert-base-uncased-wiki-sports
amanm27
2022-03-10T06:53:42Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-10T06:44:31Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-wiki-sports results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-wiki-sports This model is a fine-tuned version of [amanm27/bert-base-uncased-wiki](https://huggingface.co/amanm27/bert-base-uncased-wiki) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3589 | 1.0 | 912 | 2.0686 | | 2.176 | 2.0 | 1824 | 2.0025 | | 2.1022 | 3.0 | 2736 | 1.9774 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0 - Datasets 1.18.3 - Tokenizers 0.11.0
kyleinincubated/autonlp-cat33-624317932
kyleinincubated
2022-03-10T06:10:56Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "zh", "dataset:kyleinincubated/autonlp-data-cat33", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-10T06:09:35Z
--- tags: autonlp language: zh widget: - text: "I love AutoNLP 🤗" datasets: - kyleinincubated/autonlp-data-cat33 co2_eq_emissions: 1.2490471218570545 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 624317932 - CO2 Emissions (in grams): 1.2490471218570545 ## Validation Metrics - Loss: 0.5579860806465149 - Accuracy: 0.8717391304347826 - Macro F1: 0.6625543939916455 - Micro F1: 0.8717391304347827 - Weighted F1: 0.8593303742671491 - Macro Precision: 0.7214757380849891 - Micro Precision: 0.8717391304347826 - Weighted Precision: 0.8629042654788023 - Macro Recall: 0.6540187758140144 - Micro Recall: 0.8717391304347826 - Weighted Recall: 0.8717391304347826 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kyleinincubated/autonlp-cat33-624317932 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("kyleinincubated/autonlp-cat33-624317932", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("kyleinincubated/autonlp-cat33-624317932", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
ArnavL/twteval-pretrained
ArnavL
2022-03-10T04:52:52Z
3
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-10T04:10:45Z
--- license: mit --- # Pretrained Model Base model: bert-base-uncased Dataset: [TWTEVAL SENTIMENT](https://huggingface.co/datasets/ArnavL/TWTEval-Pretraining-Processed)
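A hedged loading sketch with the auto classes (the checkpoint is tagged `fill-mask`, so a masked-LM head is assumed; the example sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ArnavL/twteval-pretrained")
model = AutoModelForMaskedLM.from_pretrained("ArnavL/twteval-pretrained")

# Score candidates for the masked position in a tweet-like sentence.
text = f"This movie was absolutely {tokenizer.mask_token}!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top = logits[0, mask_idx].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top.tolist()))
```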
aaraki/distilbert-base-uncased-finetuned-ner
aaraki
2022-03-10T01:42:14Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-10T01:29:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.8856800348735833 - name: Recall type: recall value: 0.9091620986687549 - name: F1 type: f1 value: 0.8972674579078112 - name: Accuracy type: accuracy value: 0.9774572259202186 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0788 - Precision: 0.8857 - Recall: 0.9092 - F1: 0.8973 - Accuracy: 0.9775 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2473 | 1.0 | 878 | 0.0788 | 0.8857 | 0.9092 | 0.8973 | 0.9775 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
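A short inference sketch for this NER checkpoint; entity grouping via `aggregation_strategy` is available in recent `transformers` releases (older ones used `grouped_entities=True`), and the input sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="aaraki/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```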
amanm27/bert-base-uncased-scouting
amanm27
2022-03-10T00:40:07Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-10T00:27:42Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-scouting results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-scouting This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5443 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 378 | 1.7727 | | 2.1016 | 2.0 | 756 | 1.6040 | | 1.7298 | 3.0 | 1134 | 1.5572 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0 - Datasets 1.18.3 - Tokenizers 0.11.0
datarpit/toy
datarpit
2022-03-10T00:06:22Z
35
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-09T21:38:51Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: toy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # toy This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2124 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.4798 | 1.0 | 231 | 0.2252 | | 0.3378 | 2.0 | 462 | 0.1777 | | 0.1024 | 3.0 | 693 | 0.1586 | | 0.0736 | 4.0 | 924 | 0.1664 | | 0.1237 | 5.0 | 1155 | 0.1692 | | 0.1049 | 6.0 | 1386 | 0.1818 | | 0.0239 | 7.0 | 1617 | 0.2127 | | 0.0036 | 8.0 | 1848 | 0.1888 | | 0.0051 | 9.0 | 2079 | 0.2061 | | 0.0003 | 10.0 | 2310 | 0.1905 | | 0.0005 | 11.0 | 2541 | 0.2011 | | 0.0003 | 12.0 | 2772 | 0.1928 | | 0.0029 | 13.0 | 3003 | 0.2563 | | 0.0002 | 14.0 | 3234 | 0.2076 | | 0.0002 | 15.0 | 3465 | 0.1980 | | 0.0001 | 16.0 | 3696 | 0.2013 | | 0.0001 | 17.0 | 3927 | 0.2089 | | 0.0001 | 18.0 | 4158 | 0.1984 | | 0.0001 | 19.0 | 4389 | 0.2017 | | 0.0001 | 20.0 | 4620 | 0.2013 | | 0.0001 | 21.0 | 4851 | 0.2142 | | 0.0001 | 22.0 | 5082 | 0.1943 | | 0.0001 | 23.0 | 5313 | 0.2003 | | 0.0 | 24.0 | 5544 | 0.2015 | | 0.0001 | 25.0 | 5775 | 0.2031 | | 0.0002 | 26.0 | 6006 | 0.2600 | | 0.0022 | 27.0 | 6237 | 0.2269 | | 0.0 | 28.0 | 6468 | 0.2125 | | 0.0 | 29.0 | 6699 | 0.2172 | | 0.0 | 30.0 | 6930 | 0.2185 | | 0.0 | 31.0 | 7161 | 0.2004 | | 0.0 | 32.0 | 7392 | 0.2077 | | 0.0 | 33.0 | 7623 | 0.2333 | | 0.0003 | 34.0 | 7854 | 0.2102 | | 0.0 | 35.0 | 8085 | 0.2095 | | 0.0 | 36.0 | 8316 | 0.2030 | | 0.0 | 37.0 | 8547 | 0.2038 | | 0.0 | 38.0 | 8778 | 0.2062 | | 0.0 | 39.0 | 9009 | 0.2080 | | 0.0 | 40.0 | 9240 | 0.2083 | | 0.0 | 41.0 | 9471 | 0.2063 | | 0.0 | 42.0 | 9702 | 0.2146 | | 0.0 | 43.0 | 9933 | 0.2168 | | 0.0 | 44.0 | 10164 | 0.2112 | | 0.0 | 45.0 | 10395 | 0.2109 | | 0.0 | 46.0 | 10626 | 0.2116 | | 0.0 | 47.0 | 10857 | 0.2122 | | 0.0 | 48.0 | 11088 | 0.2122 | | 0.0 | 49.0 | 11319 | 0.2124 | | 0.0 | 50.0 | 11550 | 0.2124 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0 - Datasets 1.18.3 - Tokenizers 0.11.6
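The long training log above gives no usage hint. Since the record is tagged `question-answering` on a DistilBERT backbone, a minimal extractive-QA sketch (question and context are illustrative) would be:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="datarpit/toy")
result = qa(
    question="What is the model fine-tuned from?",
    context="The toy model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], result["score"])
```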
megantosh/flair-arabic-MSA-aqmar
megantosh
2022-03-09T22:13:31Z
2
1
flair
[ "flair", "pytorch", "Text Classification", "token-classification", "sequence-tagger-model", "ar", "dataset:AQMAR", "dataset:ANERcorp", "license:apache-2.0", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: ar license: apache-2.0 datasets: - AQMAR - ANERcorp thumbnail: https://www.informatik.hu-berlin.de/en/forschung-en/gebiete/ml-en/resolveuid/a6f82e0d7fa446a59c902cac4cafa9cb/@@images/image/preview tags: - flair - Text Classification - token-classification - sequence-tagger-model metrics: - f1 widget: - text: "اختارها خيري بشارة كممثلة، دون سابقة معرفة أو تجربة تمثيلية، لتقف بجانب فاتن حمامة في فيلم «يوم مر ويوم حلو» (1988) وهي ما زالت شابة لم تتخطَ عامها الثاني" --- # Arabic NER Model for AQMAR dataset Training was conducted over 86 epochs, using a linear decaying learning rate of 2e-05, starting from 0.3 and a batch size of 48 with fastText and Flair forward and backward embeddings. ## Original Dataset: - [AQMAR](http://www.cs.cmu.edu/~ark/ArabicNER/) ## Results: - F1-score (micro) 0.9323 - F1-score (macro) 0.9272 | | True Posititves | False Positives | False Negatives | Precision | Recall | class-F1 | |------|-----|----|----|---------|--------|----------| | LOC | 164 | 7 | 13 | 0.9591 | 0.9266 | 0.9425 | | MISC | 398 | 22 | 37 | 0.9476 | 0.9149 | 0.9310 | | ORG | 65 | 6 | 9 | 0.9155 | 0.8784 | 0.8966 | | PER | 199 | 13 | 13 | 0.9387 | 0.9387 | 0.9387 | --- # Usage ```python from flair.data import Sentence from flair.models import SequenceTagger import pyarabic.araby as araby from icecream import ic arTagger = SequenceTagger.load('megantosh/flair-arabic-MSA-aqmar') sentence = Sentence('George Washington went to Washington .') arSentence = Sentence('عمرو عادلي أستاذ للاقتصاد السياسي المساعد في الجامعة الأمريكية بالقاهرة .') # predict NER tags tagger.predict(sentence) arTagger.predict(arSentence) # print sentence with predicted tags ic(sentence.to_tagged_string) ic(arSentence.to_tagged_string) ``` # Example see an example from a [similar NER model in Flair](https://huggingface.co/megantosh/flair-arabic-multi-ner) # Model Configuration ```python (embeddings): StackedEmbeddings( (list_embedding_0): WordEmbeddings('ar') (list_embedding_1): FlairEmbeddings( (lm): LanguageModel( (drop): Dropout(p=0.1, inplace=False) (encoder): Embedding(7125, 100) (rnn): LSTM(100, 2048) (decoder): Linear(in_features=2048, out_features=7125, bias=True) ) ) (list_embedding_2): FlairEmbeddings( (lm): LanguageModel( (drop): Dropout(p=0.1, inplace=False) (encoder): Embedding(7125, 100) (rnn): LSTM(100, 2048) (decoder): Linear(in_features=2048, out_features=7125, bias=True) ) ) ) (word_dropout): WordDropout(p=0.05) (locked_dropout): LockedDropout(p=0.5) (embedding2nn): Linear(in_features=4396, out_features=4396, bias=True) (rnn): LSTM(4396, 256, batch_first=True, bidirectional=True) (linear): Linear(in_features=512, out_features=14, bias=True) (beta): 1.0 (weights): None (weight_tensor) None )" 2021-03-31 22:19:50,654 ---------------------------------------------------------------------------------------------------- 2021-03-31 22:19:50,654 Corpus: "Corpus: 3025 train + 336 dev + 373 test sentences" 2021-03-31 22:19:50,654 ---------------------------------------------------------------------------------------------------- 2021-03-31 22:19:50,654 Parameters: 2021-03-31 22:19:50,654 - learning_rate: "0.3" 2021-03-31 22:19:50,654 - mini_batch_size: "48" 2021-03-31 22:19:50,654 - patience: "3" 2021-03-31 22:19:50,654 - anneal_factor: "0.5" 2021-03-31 22:19:50,654 - max_epochs: "150" 2021-03-31 22:19:50,654 - shuffle: "True" 2021-03-31 22:19:50,654 - train_with_dev: "False" 2021-03-31 22:19:50,654 - batch_growth_annealing: "False" 2021-03-31 22:19:50,655 ------------------------------------ ``` Due to the 
right-to-left script being rendered in a left-to-right context, some formatting errors might occur and your code might appear like [this](https://ibb.co/ky20Lnq) (link accessed on 2020-10-27). # Citation *If you use this model, please consider citing [this work](https://www.researchgate.net/publication/358956953_Sequence_Labeling_Architectures_in_Diglossia_-_a_case_study_of_Arabic_and_its_dialects):* ```latex @unpublished{MMHU21, author = "M. Megahed", title = "Sequence Labeling Architectures in Diglossia", year = {2021}, doi = "10.13140/RG.2.2.34961.10084", url = {https://www.researchgate.net/publication/358956953_Sequence_Labeling_Architectures_in_Diglossia_-_a_case_study_of_Arabic_and_its_dialects} } ```
antho-data/distilbert-base-uncased-finetuned-emotion
antho-data
2022-03-09T21:27:17Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-09T20:30:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9235 - name: F1 type: f1 value: 0.9237367861627231 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2294 - Accuracy: 0.9235 - F1: 0.9237 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8637 | 1.0 | 250 | 0.3319 | 0.9075 | 0.9050 | | 0.2634 | 2.0 | 500 | 0.2294 | 0.9235 | 0.9237 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
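A usage sketch for this emotion classifier; the label names come from the `emotion` dataset, and `return_all_scores=True` (supported by the `transformers` version pinned in the card) returns one score per label:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="antho-data/distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,  # one score per emotion label
)
print(classifier("I can't wait to see the results of this experiment!"))
```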
EngNada/wav2vec2-large-xlsr-53-demo1
EngNada
2022-03-09T20:54:22Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-09T10:25:56Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xlsr-53-demo1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-demo1 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9692 - Wer: 0.8462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 5 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 10 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 12.978 | 0.06 | 100 | 3.5377 | 1.0 | | 3.5026 | 0.13 | 200 | 3.4366 | 1.0 | | 3.4084 | 0.19 | 300 | 3.3831 | 1.0 | | 3.3551 | 0.26 | 400 | 3.2563 | 1.0 | | 3.2668 | 0.32 | 500 | 3.2109 | 1.0 | | 2.9398 | 0.38 | 600 | 2.4548 | 0.9987 | | 2.2204 | 0.45 | 700 | 1.8870 | 1.0135 | | 1.7401 | 0.51 | 800 | 1.6816 | 1.0247 | | 1.5748 | 0.57 | 900 | 1.4741 | 0.9953 | | 1.4539 | 0.64 | 1000 | 1.4573 | 0.9852 | | 1.3612 | 0.7 | 1100 | 1.3534 | 0.9529 | | 1.3328 | 0.77 | 1200 | 1.3380 | 0.9320 | | 1.2459 | 0.83 | 1300 | 1.2984 | 0.9247 | | 1.1976 | 0.89 | 1400 | 1.2515 | 0.9252 | | 1.1593 | 0.96 | 1500 | 1.2345 | 0.9030 | | 1.1094 | 1.02 | 1600 | 1.2135 | 0.9305 | | 1.0485 | 1.09 | 1700 | 1.2045 | 0.9121 | | 0.9893 | 1.15 | 1800 | 1.1876 | 0.8990 | | 1.0099 | 1.21 | 1900 | 1.1663 | 0.8889 | | 0.982 | 1.28 | 2000 | 1.1674 | 0.8901 | | 0.9975 | 1.34 | 2100 | 1.1181 | 0.8812 | | 0.952 | 1.4 | 2200 | 1.1119 | 0.8817 | | 0.9311 | 1.47 | 2300 | 1.0786 | 0.8773 | | 0.9398 | 1.53 | 2400 | 1.1016 | 0.8720 | | 0.9148 | 1.6 | 2500 | 1.0878 | 0.8778 | | 0.9114 | 1.66 | 2600 | 1.1004 | 0.8712 | | 0.902 | 1.72 | 2700 | 1.0223 | 0.8744 | | 0.8978 | 1.79 | 2800 | 1.0616 | 0.8459 | | 0.8675 | 1.85 | 2900 | 1.0974 | 0.8643 | | 0.8373 | 1.92 | 3000 | 1.0389 | 0.8547 | | 0.8575 | 1.98 | 3100 | 1.0388 | 0.8480 | | 0.8313 | 2.04 | 3200 | 1.0001 | 0.8648 | | 0.7357 | 2.11 | 3300 | 1.0222 | 0.8705 | | 0.743 | 2.17 | 3400 | 1.0859 | 0.8765 | | 0.7306 | 2.23 | 3500 | 1.0109 | 0.8515 | | 0.7525 | 2.3 | 3600 | 0.9942 | 0.8619 | | 0.7308 | 2.36 | 3700 | 1.0004 | 0.8578 | | 0.7266 | 2.43 | 3800 | 1.0003 | 0.8497 | | 0.737 | 2.49 | 3900 | 1.0146 | 0.8505 | | 0.7202 | 2.55 | 4000 | 1.0172 | 0.8653 | | 0.6945 | 2.62 | 4100 | 0.9894 | 0.8415 | | 0.6633 | 2.68 | 4200 | 0.9894 | 0.8496 | | 0.6972 | 2.75 | 4300 | 0.9805 | 0.8505 | | 0.6872 | 2.81 | 4400 | 0.9939 | 0.8509 | | 0.7238 | 2.87 | 4500 | 0.9740 | 0.8532 | | 0.6847 | 2.94 | 4600 | 0.9692 | 0.8462 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
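No inference example is given in the card. A hedged sketch using the ASR pipeline follows; the audio must be 16 kHz mono and the file path is a placeholder:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="EngNada/wav2vec2-large-xlsr-53-demo1")

# Expects 16 kHz mono audio; "sample.wav" is a placeholder path.
print(asr("sample.wav")["text"])
```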
saksornr/mt-align-finetuned-LST-en-to-th
saksornr
2022-03-09T20:41:54Z
6
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-09T07:45:26Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: mt-align-finetuned-LST-en-to-th results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt-align-finetuned-LST-en-to-th This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 77 | 1.6042 | 13.1732 | 26.144 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
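Because the base checkpoint is `Helsinki-NLP/opus-mt-en-mul`, the target language is normally selected with a `>>xxx<<` token prefix. Assuming the Thai code `tha` (worth verifying against the tokenizer's supported language tokens), a usage sketch would be:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "saksornr/mt-align-finetuned-LST-en-to-th"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# ">>tha<<" selects Thai in multilingual OPUS-MT models -- an assumption to verify here.
batch = tokenizer(">>tha<< Machine translation is fun.", return_tensors="pt")
generated = model.generate(**batch, max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```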
hyechanjun/reverse-interview-question
hyechanjun
2022-03-09T18:57:52Z
5
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-09T18:52:33Z
An AI model that, given a statement, generates a question that would likely have elicited that statement. Created for a Senior Project at Calvin University.
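A minimal sketch of how this statement-to-question model can be called (the input statement and generation length are illustrative):

```python
from transformers import pipeline

reverse_qg = pipeline("text2text-generation", model="hyechanjun/reverse-interview-question")

statement = "I started programming when I was ten years old."
print(reverse_qg(statement, max_length=64)[0]["generated_text"])
```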
Noricum/wav2vec2-large-xls-r-300m-de-with-lm
Noricum
2022-03-09T18:14:21Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-08T15:45:28Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-de-with-lm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-de-with-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
petrichorRainbow/mrf-covid-bert
petrichorRainbow
2022-03-09T17:24:51Z
2
0
transformers
[ "transformers", "pytorch", "bert", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-09T16:59:51Z
--- license: apache-2.0 ---
petrichorRainbow/mrf-bert
petrichorRainbow
2022-03-09T17:12:06Z
3
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-09T16:59:17Z
--- license: apache-2.0 ---
orzhan/ruroberta-ruatd-binary
orzhan
2022-03-09T15:36:04Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-09T15:28:56Z
sberbank-ai/ruRoberta-large fine-tuned for the Russian Artificial Text Detection (RuATD) shared task, binary setting: human-written vs. machine-generated text.
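A hedged inference sketch; the two labels are assumed to be human-written vs. machine-generated, so check the checkpoint's `id2label` mapping rather than relying on this guess:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "orzhan/ruroberta-ruatd-binary"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "Пример текста для проверки."  # illustrative Russian input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Map scores back to the labels stored in the model config.
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```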
jfealko/wav2vec2-large-xls-r-300m-irish-local
jfealko
2022-03-09T15:01:06Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-09T12:28:28Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-irish-local results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-irish-local This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 2.0788 - Wer: 0.7527 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 90 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 8.3839 | 2.94 | 50 | 3.3021 | 1.0 | | 3.0703 | 5.88 | 100 | 3.1749 | 1.0 | | 3.1744 | 8.82 | 150 | 3.0452 | 1.0 | | 2.9719 | 11.76 | 200 | 2.9767 | 1.0 | | 2.9539 | 14.71 | 250 | 2.9992 | 1.0 | | 2.9438 | 17.65 | 300 | 2.9767 | 1.0 | | 2.9296 | 20.59 | 350 | 2.9475 | 1.0 | | 2.9269 | 23.53 | 400 | 2.9402 | 1.0 | | 2.9116 | 26.47 | 450 | 2.9255 | 1.0 | | 2.8326 | 29.41 | 500 | 2.7238 | 1.0 | | 2.5758 | 32.35 | 550 | 2.3599 | 0.9900 | | 2.1242 | 35.29 | 600 | 1.8478 | 0.9491 | | 1.4603 | 38.24 | 650 | 1.5991 | 0.9002 | | 1.0287 | 41.18 | 700 | 1.5931 | 0.8434 | | 0.7687 | 44.12 | 750 | 1.6493 | 0.8253 | | 0.571 | 47.06 | 800 | 1.6889 | 0.8057 | | 0.4598 | 50.0 | 850 | 1.7521 | 0.7978 | | 0.3902 | 52.94 | 900 | 1.9074 | 0.7975 | | 0.318 | 55.88 | 950 | 1.9352 | 0.8133 | | 0.3026 | 58.82 | 1000 | 2.0157 | 0.8028 | | 0.2862 | 61.76 | 1050 | 1.9231 | 0.7720 | | 0.2696 | 64.71 | 1100 | 1.9256 | 0.7644 | | 0.2528 | 67.65 | 1150 | 2.0277 | 0.7741 | | 0.2051 | 70.59 | 1200 | 1.9921 | 0.7550 | | 0.2018 | 73.53 | 1250 | 2.0416 | 0.7615 | | 0.187 | 76.47 | 1300 | 2.0861 | 0.7635 | | 0.1749 | 79.41 | 1350 | 2.0926 | 0.7577 | | 0.1713 | 82.35 | 1400 | 2.0632 | 0.7533 | | 0.1518 | 85.29 | 1450 | 2.0903 | 0.7542 | | 0.16 | 88.24 | 1500 | 2.0788 | 0.7527 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
Narshion/mWACH_mBERT_System
Narshion
2022-03-09T13:49:35Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-09T12:28:12Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on mWACH NEO dataset. It achieves the following results on the evaluation set: - Loss: 1.6344 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.12.4 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
akshaychaudhary/distilbert-base-uncased-finetuned-combinedmodel1-ner
akshaychaudhary
2022-03-09T12:59:14Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-09T11:01:49Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-combinedmodel1-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-combinedmodel1-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3126 - Precision: 0.0289 - Recall: 0.1443 - F1: 0.0481 - Accuracy: 0.7058 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 312 | 1.5290 | 0.0431 | 0.2278 | 0.0725 | 0.6990 | | 0.1106 | 2.0 | 624 | 2.0923 | 0.0341 | 0.1722 | 0.0569 | 0.7041 | | 0.1106 | 3.0 | 936 | 2.3126 | 0.0289 | 0.1443 | 0.0481 | 0.7058 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
sanchit-gandhi/wav2vec2-2-rnd-2-layer
sanchit-gandhi
2022-03-09T09:50:11Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-08T10:17:31Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 5.2188 - Wer: 0.9238 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.7093 | 6.73 | 1500 | 5.7514 | 1.2104 | | 5.642 | 13.45 | 3000 | 5.2188 | 0.9238 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
merve/anime-faces-discriminator
merve
2022-03-09T09:12:21Z
0
0
keras
[ "keras", "tf-keras", "dcgan", "dataset:anime-faces", "license:apache-2.0", "region:us" ]
null
2022-03-04T16:47:21Z
--- license: apache-2.0 library_name: keras tags: - dcgan datasets: - anime-faces --- ## Model description Anime face discriminator model using [TensorFlow DCGAN example](https://www.tensorflow.org/tutorials/generative/dcgan). ## Training and evaluation data Model is trained on [anime faces dataset](https://huggingface.co/datasets/merve/anime-faces). ## Intended use and biases This model is not intended for production.
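Since this is a Keras artifact, it can presumably be pulled with `from_pretrained_keras` from `huggingface_hub`; the sketch below reads the input shape from the loaded model instead of assuming one, and the random input is only a stand-in for generated or dataset images:

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Load the DCGAN discriminator saved as a Keras model.
discriminator = from_pretrained_keras("merve/anime-faces-discriminator")
print(discriminator.input_shape)  # e.g. (None, H, W, C) -- check rather than assume

# Score a random image-shaped tensor in [-1, 1].
dummy = np.random.uniform(-1, 1, size=(1,) + discriminator.input_shape[1:]).astype("float32")
print(discriminator.predict(dummy))
```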
huggingtweets/aniraster_
huggingtweets
2022-03-09T09:03:20Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-09T09:02:38Z
--- language: en thumbnail: http://www.huggingtweets.com/aniraster_/1646816595677/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1460097593015472141/Yt6YwEU1_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Aniraster</div> <div style="text-align: center; font-size: 14px;">@aniraster_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Aniraster. | Data | Aniraster | | --- | --- | | Tweets downloaded | 2581 | | Retweets | 169 | | Short tweets | 660 | | Tweets kept | 1752 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nr4gbjn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aniraster_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g7h1bov) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g7h1bov/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/aniraster_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
gsarti/it5-base-headline-generation
gsarti
2022-03-09T08:07:05Z
5
1
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "headline-generation", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - it license: apache-2.0 datasets: - gsarti/change_it tags: - italian - sequence-to-sequence - newspaper - ilgiornale - repubblica - headline-generation widget: - text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. 
Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre." - text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990." - text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. 
Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione." - text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"." metrics: - rouge - bertscore model-index: - name: it5-base-headline-generation results: - task: type: headline-generation name: "Headline generation" dataset: type: headgen_it name: "HeadGen-IT" metrics: - type: rouge1 value: 0.310 name: "Test Rouge1" - type: rouge2 value: 0.112 name: "Test Rouge2" - type: rougeL value: 0.270 name: "Test RougeL" - type: bertscore value: 0.433 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" co2_eq_emissions: emissions: "17g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" --- # IT5 Base for News Headline Generation 🗞️ 🇮🇹 This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. 
They can be used directly with pipelines as: ```python from transformers import pipelines hg = pipeline("text2text-generation", model='it5/it5-base-headline-generation') hg("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".") >>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-headline-generation") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-headline-generation") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
gsarti/it5-base-wiki-summarization
gsarti
2022-03-09T08:06:40Z
16
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "wikipedia", "summarization", "wits", "it", "dataset:wits", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: - it license: apache-2.0 datasets: - wits tags: - italian - sequence-to-sequence - wikipedia - summarization - wits widget: - text: "La 5ª Commissione ha competenza per i disegni di legge riguardanti le specifiche materie del bilancio, del personale e dei servizi del Ministero dell'economia, nonché per i disegni di legge riguardanti la materia finanziaria. La Commissione è composta da 26 senatori (di cui 2 segretari, 2 vicepresidenti di cui 1 componente esterno, e un presidente) scelti in modo omogeneo tra i componenti di quel ramo del Parlamento, in modo da rispecchiarne le forze politiche presenti. Essi sono scelti dai gruppi parlamentari (e non dal Presidente, come invece accade per l'organismo della Giunta parlamentare): per la nomina dei membri ciascun Gruppo, entro cinque giorni dalla propria costituzione, procede, dandone comunicazione alla Presidenza del Senato, alla designazione dei propri rappresentanti nelle singole Commissioni permanenti. Ogni senatore chiamato a far parte del governo o eletto presidente della Commissione è, per la durata della carica, sostituito dal suo gruppo nella Commissione con un altro senatore, che continuerà ad appartenere anche alla Commissione di provenienza. Tranne in rari casi nessun Senatore può essere assegnato a più di una Commissione permanente. Le Commissioni permanenti sono rinnovate dopo il primo biennio della legislatura ed i loro componenti possono essere confermati." - text: "Interni della chiesa Si pensa che già ai tempi di Gediminas vi fosse una piccola chiesa, probabilmente in legno. Nel 1408 circa Vitoldo costruì la chiesa dello Spirito Santo che andò in seguito ampliata. Nel 1501 Alessandro Jagellone lo donò al monastero domenicano, il più antico della Lituania, che nel 1679-88 fu ampliato e ricostruito. Di quel periodo sopravvivono le mura della chiesa, mentre l'arredamento interno fu realizzato nel 1749-1770 e la cupola affrontò dei lavori di restauro nel 1752-1760. Nel 1844 le autorità zariste chiusero il monastero e la chiesa divenne parrocchiale. Oggi serve la comunità polacca di Vilnius. Su via Šv. Ignoto fu fondato un monastero domenicano nel 1501. Come molti altri edifici, questo monastero fu convertito in una prigione dalle autorità zariste nel 1807. Costituì un luogo di prigionia per molti patrioti lituani, nello specifico i Filareti, i quali parteciparono alle rivolte del 1831 e del 1863. Organo La chiesa si trova lateralmente rispetto alla strada e non ha una facciata principale ben disegnata. L'altezza, inclusa la cupola, è di 51 m. La parte inferiore della facciata (con piccole torri gemelle) è ricoperta da edifici conventuali e l'esterno presenta caratteristiche architettoniche tipiche del tardo barocco. Celebre per i fantasiosi ornamenti rococò, l'interno della chiesa è tra i più celebri della Lituania per via dei cartigli con vari stemmi e affreschi lungo la navata: vi sono 16 altari nella chiesa. Gli altari e il pulpito sono assai decorati con sculture e ornamenti rotondi e in rilievo. Tra gli affreschi barocchi, si pensi alla composizione multi-figurale intitolata ''Apoteosi dello Spirito Santo'' (neobarocco, XIX secolo) nella cupola, 45 dipinti nella chiesa (tra cui un'immagine di Santa Barbara con un'ambientazione del XVII o XVIII secolo, una di Santa Caterina da Siena in stile rococò di Szymon Czechowicz, un ritratto di Alessandro Jagellone di un artista sconosciuto della seconda metà del XVIII secolo). 
Un ingresso sotto l'altare conduce alle grandi volte, labirintiche, con molte stanze e cripte: i sotterranei ospitano i resti di centinaia di residenti di Vilnius, alcuni dei quali mummificatisi naturalmente, e sono circondati da leggende metropolitane. Sebbene l'esistenza dei sotterranei fosse nota, i primi sforzi per esplorare e mappare le cripte furono abbandonate nonostante lo sforzo degli studenti dell'Università di Vilnius negli anni '30. Tuttavia, questi ultimi non avevano osservato le corrette procedure archeologiche e causarono infatti molti danni: il modus operandi prevedeva lo smistamento delle ossa ponendo tutti i teschi sugli scaffali e rimuovendoli le tombe. Da allora, i resti sono stati spostati molte volte lasciandoli in uno stato casuale e disorganizzato. Stando alle leggende che aleggiano sul luogo, i resti sarebbero di soldati francesi recatisi in città nel corso della campagna di Russia del 1812 avviata da Napoleone Bonaparte, di vittime dell'Inquisizione o della peste nera. Più romantiche risultano le affermazioni di chi sostiene che i corridoi sotterranei facevano parte di una rete di passaggi più ampia che consentiva agli amanti leggendari Barbara Radziwiłł e Sigismondo II Augusto di incontrarsi in segreto. Nel 2011, gli antropologi dell'Università di Vilnius, guidati da Rimantas Jankauskas, avviarono uno studio sui corpi mummificati, stimando settimane dopo che le volte conservassero i resti di circa 600 persone, tra cui molte donne e bambini dalla metà del XVIII secolo all'inizio del XIX secolo. Il team ha selezionato i cadaveri meglio conservati e ha eseguito la loro tomografia. I risultati mostrano che molte persone erano in sovrappeso e avevano l'alluce valgo, il che ha portato alla conclusione che si trattava di alti borghesi o comunque di cittadini abbienti. " - text: "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. " - text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. 
Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo. " metrics: - rouge - bertscore model-index: - name: it5-base-wiki-summarization results: - task: type: wiki-summarization name: "Wikipedia Summarization" dataset: type: wits name: "WITS" metrics: - type: rouge1 value: 0.369 name: "Test Rouge1" - type: rouge2 value: 0.217 name: "Test Rouge2" - type: rougeL value: 0.333 name: "Test RougeL" - type: bertscore value: 0.530 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" co2_eq_emissions: emissions: "17g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png --- # IT5 Base for Wikipedia Summarization 📑 🇮🇹 This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on Wikipedia summarization on the [WITS](https://www.semanticscholar.org/paper/WITS%3A-Wikipedia-for-Italian-Text-Summarization-Casola-Lavelli/ad6c83122e721c7c0db4a40727dac3b4762cd2b1) dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. 
They can be used directly with pipelines as:

```python
from transformers import pipeline

hg = pipeline("text2text-generation", model='it5/it5-base-wiki-summarization')
hg("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico.")
>>> [{"generated_text": "L' '''isola di Rabot''' si trova in prossimità dell'isola di Renaud, a sud dell'Argentina."}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-wiki-summarization")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
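For offline use, the autoclasses above can also be paired with `generate` directly. The snippet below is a minimal sketch: the beam-search settings, the 64-token summary length, and the 512-token input truncation are illustrative assumptions, not the values used for the paper's evaluation.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-wiki-summarization")

article = "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. ..."  # stands for the full article text shown in the pipeline example above

# Tokenize, truncating long articles to a conventional encoder length (assumed 512 tokens)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# Generation settings here are illustrative defaults, not the paper's configuration
summary_ids = model.generate(**inputs, num_beams=4, max_length=64, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```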
gsarti/it5-base-question-generation
gsarti
2022-03-09T08:06:11Z
5
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "question-generation", "squad_it", "it", "dataset:squad_it", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - it license: apache-2.0 datasets: - squad_it tags: - italian - sequence-to-sequence - question-generation - squad_it - text2text-generation widget: - text: "Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una \"grande pestilenza nell' aria\". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia" - text: "Il 14 aprile 2011, ABC ha annullato le lunghe opere di sapone All My Children e One Life to Live dopo 41 e 43 anni in onda, rispettivamente (in seguito al contraccolpo dei tifosi, ABC ha venduto i diritti ad entrambi gli spettacoli a Prospect Park, che alla fine ha rilanciato i saponi su Hulu per un' ulteriore stagione nel 2013 e con entrambe le società che si citano in giudizio per accuse di interferenza con il processo di rilancio degli spettacoli, mancato pagamento delle tasse di licenza. Il talk/lifestyle show che ha sostituito One Life to Live, The Revolution, non è riuscito a generare giudizi soddisfacenti ed è stato a sua volta annullato dopo soli sette mesi. La stagione 2011-12 ha visto l' ABC cadere al quarto posto nel 18-49 demografico nonostante rinnovando una manciata di nuovi spettacoli (compresi i drammi matricole Scandal, Revenge e Once Upon a Time) per la seconda stagione. Risposta: Hulu" - text: "L' American Broadcasting Company (ABC) (stlized nel suo logo come abc dal 1957) è una rete televisiva commerciale americana trasmissione televisiva che è di proprietà del Disney-ABC Television Group, una controllata della divisione Disney Media Networks di The Walt Disney Company. La rete fa parte delle grandi reti televisive Big Three. La rete ha sede a Columbus Avenue e West 66th Street a Manhattan, con ulteriori uffici e stabilimenti di produzione a New York City, Los Angeles e Burbank, California. Risposta: Manhattan" - text: "La disobbedienza civile non rivoluzionaria è una semplice disobbedienza delle leggi sulla base del fatto che sono giudicate \"sbagliate\" da una coscienza individuale, o come parte di uno sforzo per rendere alcune leggi inefficaci, per causarne l' abrogazione, o per esercitare pressioni per ottenere i propri desideri politici su qualche altra questione. La disobbedienza civile rivoluzionaria è più che altro un tentativo attivo di rovesciare un governo (o di cambiare le tradizioni culturali, i costumi sociali, le credenze religiose, ecc. La rivoluzione non deve necessariamente essere politica, cioè \"rivoluzione culturale\", implica semplicemente un cambiamento radicale e diffuso in una sezione del tessuto sociale). Gli atti di Gandhi sono stati descritti come disobbedienza civile rivoluzionaria. È stato affermato che gli ungheresi sotto Ferenc Deák hanno diretto una disobbedienza civile rivoluzionaria contro il governo austriaco. Thoreau ha anche scritto di disobbedienza civile realizzando \"rivoluzione pacifica\". 
Howard Zinn, Harvey Wheeler e altri hanno identificato il diritto sposato nella Dichiarazione d' Indipendenza di \"alterare o abolire\" un governo ingiusto come principio di disobbedienza civile. Risposta: Ferenc Deák" metrics: - rouge - bertscore model-index: - name: it5-base-question-generation results: - task: type: question-generation name: "Question generation" dataset: type: squad_it name: "SQuAD-IT" metrics: - type: rouge1 value: 0.382 name: "Test Rouge1" - type: rouge2 value: 0.199 name: "Test Rouge2" - type: rougeL value: 0.354 name: "Test RougeL" - type: bertscore value: 0.516 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" co2_eq_emissions: emissions: "17g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png

---

# IT5 Base for Question Generation 💭 🇮🇹

This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on question generation on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).

A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.

## Using the model

Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model='it5/it5-base-question-generation')
qg("Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una \"grande pestilenza nell' aria\". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia")
>>> [{"generated_text": "Per chi è stato redatto il referto medico?"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-question-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-question-generation")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
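The widget examples suggest that the model expects the target answer to be appended to the context after a `Risposta:` marker. A small helper along these lines may be convenient; note that the formatting convention is inferred from the examples above rather than documented separately, and `make_qg_input` is a hypothetical name introduced here for illustration.

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="it5/it5-base-question-generation")

def make_qg_input(context: str, answer: str) -> str:
    # Follows the "context ... Risposta: answer" pattern seen in the widget examples (assumed convention)
    return f"{context} Risposta: {answer}"

context = "L' American Broadcasting Company (ABC) è una rete televisiva commerciale americana. La rete ha sede a Columbus Avenue e West 66th Street a Manhattan."
print(qg(make_qg_input(context, "Manhattan"))[0]["generated_text"])
```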
gsarti/it5-small-ilgiornale-to-repubblica
gsarti
2022-03-09T08:03:52Z
7
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "style-transfer", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - it license: apache-2.0 datasets: - gsarti/change_it tags: - italian - sequence-to-sequence - newspaper - ilgiornale - repubblica - style-transfer widget: - text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. 
Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre." - text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990." - text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. 
Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione." - text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"." 
metrics: - rouge - bertscore - headline-headline-consistency-classifier - headline-article-consistency-classifier model-index: - name: it5-small-ilgiornale-to-repubblica results: - task: type: headline-style-transfer-ilgiornale-to-repubblica name: "Headline style transfer (Il Giornale to Repubblica)" dataset: type: gsarti/change_it name: "CHANGE-IT" metrics: - type: rouge1 value: 0.270 name: "Test Rouge1" - type: rouge2 value: 0.092 name: "Test Rouge2" - type: rougeL value: 0.239 name: "Test RougeL" - type: bertscore value: 0.404 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" - type: headline-headline-consistency-classifier value: 0.909 name: "Test Headline-Headline Consistency Accuracy" - type: headline-article-consistency-classifier value: 0.869 name: "Test Headline-Article Consistency Accuracy" co2_eq_emissions: emissions: "8g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png

---

# IT5 Small for News Headline Style Transfer (Il Giornale to Repubblica) 🗞️➡️🗞️ 🇮🇹

This repository contains the checkpoint for the [IT5 Small](https://huggingface.co/gsarti/it5-small) model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).

A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.

## Using the model

The model is trained to generate a headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

g2r = pipeline("text2text-generation", model='it5/it5-small-ilgiornale-to-repubblica')
g2r("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-small-ilgiornale-to-repubblica")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-small-ilgiornale-to-repubblica")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
gsarti/mt5-base-ilgiornale-to-repubblica
gsarti
2022-03-09T08:02:59Z
4
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "mt5", "text2text-generation", "italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "style-transfer", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - it license: apache-2.0 datasets: - gsarti/change_it tags: - italian - sequence-to-sequence - newspaper - ilgiornale - repubblica - style-transfer widget: - text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. 
Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre." - text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990." - text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. 
Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione." - text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"." 
metrics: - rouge - bertscore - headline-headline-consistency-classifier - headline-article-consistency-classifier model-index: - name: mt5-base-ilgiornale-to-repubblica results: - task: type: headline-style-transfer-ilgiornale-to-repubblica name: "Headline style transfer (Il Giornale to Repubblica)" dataset: type: gsarti/change_it name: "CHANGE-IT" metrics: - type: rouge1 value: 0.282 name: "Test Rouge1" - type: rouge2 value: 0.101 name: "Test Rouge2" - type: rougeL value: 0.248 name: "Test RougeL" - type: bertscore value: 0.411 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" - type: headline-headline-consistency-classifier value: 0.815 name: "Test Headline-Headline Consistency Accuracy" - type: headline-article-consistency-classifier value: 0.773 name: "Test Headline-Article Consistency Accuracy" co2_eq_emissions: emissions: "40g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png

---

# mT5 Base for News Headline Style Transfer (Il Giornale to Repubblica) 🗞️➡️🗞️ 🇮🇹

This repository contains the checkpoint for the [mT5 Base](https://huggingface.co/google/mt5-base) model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).

A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.

## Using the model

The model is trained to generate a headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

g2r = pipeline("text2text-generation", model='it5/mt5-base-ilgiornale-to-repubblica')
g2r("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-ilgiornale-to-repubblica")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-ilgiornale-to-repubblica")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
gsarti/it5-small-repubblica-to-ilgiornale
gsarti
2022-03-09T08:02:27Z
5
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "style-transfer", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - it license: apache-2.0 datasets: - gsarti/change_it tags: - italian - sequence-to-sequence - newspaper - ilgiornale - repubblica - style-transfer widget: - text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. 
Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre." - text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990." - text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. 
Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione." - text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"." 
metrics: - rouge - bertscore - headline-headline-consistency-classifier - headline-article-consistency-classifier model-index: - name: it5-small-repubblica-to-ilgiornale results: - task: type: headline-style-transfer-repubblica-to-ilgiornale name: "Headline style transfer (Repubblica to Il Giornale)" dataset: type: gsarti/change_it name: "CHANGE-IT" metrics: - type: rouge1 value: 0.255 name: "Test Rouge1" - type: rouge2 value: 0.080 name: "Test Rouge2" - type: rougeL value: 0.223 name: "Test RougeL" - type: bertscore value: 0.380 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" - type: headline-headline-consistency-classifier value: 0.887 name: "Test Headline-Headline Consistency Accuracy" - type: headline-article-consistency-classifier value: 0.894 name: "Test Headline-Article Consistency Accuracy" co2_eq_emissions: emissions: "8g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png

---

# IT5 Small for News Headline Style Transfer (Repubblica to Il Giornale) 🗞️➡️🗞️ 🇮🇹

This repository contains the checkpoint for the [IT5 Small](https://huggingface.co/gsarti/it5-small) model fine-tuned on news headline style transfer in the Repubblica to Il Giornale direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).

A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.

## Using the model

The model is trained to generate a headline in the style of Il Giornale from the full body of an article written in the style of Repubblica. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:

```python
from transformers import pipeline

r2g = pipeline("text2text-generation", model='it5/it5-small-repubblica-to-ilgiornale')
r2g("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-small-repubblica-to-ilgiornale")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-small-repubblica-to-ilgiornale")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
gsarti/it5-small-headline-generation
gsarti
2022-03-09T08:00:22Z
17
1
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "headline-generation", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - it license: apache-2.0 datasets: - gsarti/change_it tags: - italian - sequence-to-sequence - newspaper - ilgiornale - repubblica - headline-generation widget: - text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. 
Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre." - text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990." - text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. 
Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione." - text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"." metrics: - rouge - bertscore model-index: - name: it5-small-headline-generation results: - task: type: headline-generation name: "Headline generation" dataset: type: headgen_it name: "HeadGen-IT" metrics: - type: rouge1 value: 0.287 name: "Test Rouge1" - type: rouge2 value: 0.100 name: "Test Rouge2" - type: rougeL value: 0.253 name: "Test RougeL" - type: bertscore value: 0.414 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" co2_eq_emissions: emissions: "8g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png --- # IT5 Small for News Headline Generation 📣 🇮🇹 This repository contains the checkpoint for the [IT5 Small](https://huggingface.co/gsarti/it5-small) model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. 
They can be used directly with pipelines as:

```python
from transformers import pipeline

hg = pipeline("text2text-generation", model='it5/it5-small-headline-generation')
hg("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-small-headline-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-small-headline-generation")
```

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```
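For completeness, here is a minimal end-to-end headline generation sketch using the autoclasses shown above. The decoding settings (`num_beams`, `max_length`) and the shortened example input are illustrative assumptions, not the configuration used to produce the reported scores:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned checkpoint (same autoclasses as above).
tokenizer = AutoTokenizer.from_pretrained("it5/it5-small-headline-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-small-headline-generation")

article = "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy."

# Tokenize the article, truncating long inputs to the encoder's maximum length.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# Generate a headline; the beam-search settings here are illustrative defaults.
with torch.no_grad():
    output_ids = model.generate(**inputs, num_beams=4, max_length=64, early_stopping=True)

headline = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(headline)
```

The pipeline call above wraps the same steps; using the autoclasses directly is mainly useful when you need explicit control over truncation and decoding parameters.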