| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-02 18:52:31 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (533 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-02 18:52:05 |
| card | string (length) | 11 | 1.01M |
HaoHu/vit-base-patch16-224-in21k-classify-4scence
HaoHu
2022-07-24T16:02:55Z
48
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-24T15:23:48Z
--- license: other --- This model was trained on the Contest dataset. The original dataset link: https://pan.baidu.com/s/1pr094NZ2QMj3nLy12gfa6g (password: kb7a)
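The card above does not include a usage snippet; below is a minimal sketch of running inference with the `transformers` image-classification pipeline (the image path is a placeholder).

```python
from transformers import pipeline

# Load the fine-tuned ViT scene classifier from the Hub.
classifier = pipeline("image-classification", model="HaoHu/vit-base-patch16-224-in21k-classify-4scence")

# "scene.jpg" is a placeholder path to a local image file.
print(classifier("scene.jpg"))
```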
bigmorning/distilgpt_new3_0030
bigmorning
2022-07-24T15:59:39Z
3
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-07-24T15:54:12Z
--- tags: - generated_from_keras_callback model-index: - name: distilgpt_new3_0030 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt_new3_0030 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5197 - Validation Loss: 2.4026 - Epoch: 29 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.5407 | 2.4254 | 0 | | 2.5399 | 2.4247 | 1 | | 2.5391 | 2.4238 | 2 | | 2.5383 | 2.4232 | 3 | | 2.5375 | 2.4210 | 4 | | 2.5368 | 2.4210 | 5 | | 2.5361 | 2.4197 | 6 | | 2.5353 | 2.4193 | 7 | | 2.5345 | 2.4191 | 8 | | 2.5339 | 2.4177 | 9 | | 2.5332 | 2.4188 | 10 | | 2.5324 | 2.4160 | 11 | | 2.5317 | 2.4164 | 12 | | 2.5309 | 2.4145 | 13 | | 2.5302 | 2.4153 | 14 | | 2.5295 | 2.4139 | 15 | | 2.5288 | 2.4134 | 16 | | 2.5282 | 2.4123 | 17 | | 2.5274 | 2.4116 | 18 | | 2.5267 | 2.4110 | 19 | | 2.5259 | 2.4106 | 20 | | 2.5251 | 2.4097 | 21 | | 2.5244 | 2.4074 | 22 | | 2.5238 | 2.4078 | 23 | | 2.5232 | 2.4072 | 24 | | 2.5223 | 2.4062 | 25 | | 2.5217 | 2.4054 | 26 | | 2.5211 | 2.4057 | 27 | | 2.5204 | 2.4044 | 28 | | 2.5197 | 2.4026 | 29 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
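Since the usage sections of this card are still empty, here is a minimal sketch of loading the TensorFlow checkpoint for generation (the prompt is arbitrary).

```python
from transformers import pipeline

# The repository ships TensorFlow weights, so request the TF framework explicitly.
generator = pipeline("text-generation", model="bigmorning/distilgpt_new3_0030", framework="tf")

print(generator("Once upon a time", max_new_tokens=30))
```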
SummerChiam/rust_image_classification_1
SummerChiam
2022-07-24T14:47:06Z
48
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-24T14:46:56Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rust_image_classification results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.903797447681427 --- # rust_image_classification Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### nonrust ![nonrust](images/nonrust.png) #### rust ![rust](images/rust.png)
bigmorning/distilgpt_new3_0025
bigmorning
2022-07-24T14:33:32Z
3
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-07-24T14:28:09Z
--- tags: - generated_from_keras_callback model-index: - name: distilgpt_new3_0025 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt_new3_0025 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5232 - Validation Loss: 2.4072 - Epoch: 24 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.5407 | 2.4254 | 0 | | 2.5399 | 2.4247 | 1 | | 2.5391 | 2.4238 | 2 | | 2.5383 | 2.4232 | 3 | | 2.5375 | 2.4210 | 4 | | 2.5368 | 2.4210 | 5 | | 2.5361 | 2.4197 | 6 | | 2.5353 | 2.4193 | 7 | | 2.5345 | 2.4191 | 8 | | 2.5339 | 2.4177 | 9 | | 2.5332 | 2.4188 | 10 | | 2.5324 | 2.4160 | 11 | | 2.5317 | 2.4164 | 12 | | 2.5309 | 2.4145 | 13 | | 2.5302 | 2.4153 | 14 | | 2.5295 | 2.4139 | 15 | | 2.5288 | 2.4134 | 16 | | 2.5282 | 2.4123 | 17 | | 2.5274 | 2.4116 | 18 | | 2.5267 | 2.4110 | 19 | | 2.5259 | 2.4106 | 20 | | 2.5251 | 2.4097 | 21 | | 2.5244 | 2.4074 | 22 | | 2.5238 | 2.4078 | 23 | | 2.5232 | 2.4072 | 24 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
bigmorning/distilgpt_new3_0010
bigmorning
2022-07-24T10:13:12Z
3
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-07-24T10:07:46Z
--- tags: - generated_from_keras_callback model-index: - name: distilgpt_new3_0010 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt_new3_0010 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5339 - Validation Loss: 2.4177 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.5407 | 2.4254 | 0 | | 2.5399 | 2.4247 | 1 | | 2.5391 | 2.4238 | 2 | | 2.5383 | 2.4232 | 3 | | 2.5375 | 2.4210 | 4 | | 2.5368 | 2.4210 | 5 | | 2.5361 | 2.4197 | 6 | | 2.5353 | 2.4193 | 7 | | 2.5345 | 2.4191 | 8 | | 2.5339 | 2.4177 | 9 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
onon214/transformer-NLP
onon214
2022-07-24T09:41:22Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-24T09:31:39Z
--- tags: - generated_from_trainer model-index: - name: transformer-NLP results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # transformer-NLP This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 8.4503 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 9.8223 | 1.0 | 21 | 9.4635 | | 9.4003 | 2.0 | 42 | 9.2399 | | 9.1754 | 3.0 | 63 | 9.0618 | | 8.9665 | 4.0 | 84 | 8.8478 | | 8.8297 | 5.0 | 105 | 8.7369 | | 8.6993 | 6.0 | 126 | 8.6474 | | 8.6372 | 7.0 | 147 | 8.5848 | | 8.5375 | 8.0 | 168 | 8.4988 | | 8.5175 | 9.0 | 189 | 8.4400 | | 8.4955 | 10.0 | 210 | 8.4503 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
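The card leaves its usage sections empty; a minimal sketch of querying the model with the fill-mask pipeline follows (the example sentence is arbitrary, and `[MASK]` assumes the default BERT mask token).

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="onon214/transformer-NLP")

# The model was trained from scratch on a small dataset, so predictions may be of limited quality.
print(unmasker("The capital of France is [MASK]."))
```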
bigmorning/distilgpt_new3_0005
bigmorning
2022-07-24T08:46:05Z
3
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-07-24T08:41:04Z
--- tags: - generated_from_keras_callback model-index: - name: distilgpt_new3_0005 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt_new3_0005 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5375 - Validation Loss: 2.4210 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.5407 | 2.4254 | 0 | | 2.5399 | 2.4247 | 1 | | 2.5391 | 2.4238 | 2 | | 2.5383 | 2.4232 | 3 | | 2.5375 | 2.4210 | 4 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
dnouri/ventricular_short_axis_3label
dnouri
2022-07-24T08:44:02Z
0
0
null
[ "MONAI", "region:us" ]
null
2022-07-22T11:38:50Z
--- tags: - MONAI --- # 3 Label Ventricular Segmentation This network segments the cardiac ventricles in 2D short axis MR images. The left ventricular pool is class 1, left ventricular myocardium class 2, and right ventricular pool class 3. Full cycle segmentation with this network is possible, although much of the training data is composed of segmented end-diastole images. The input to the network is a single 2D image, so segmenting whole time-dependent volumes consists of multiple inference operations. The network and training scheme are essentially identical to those described in: `Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. https://doi.org/10.1007/978-3-030-12029-0_40` ## Data The dataset used to train this network unfortunately cannot be made public as it contains unreleased image data from King's College London. Existing public datasets such as the [Sunnybrook Cardiac Dataset](http://www.cardiacatlas.org/studies/sunnybrook-cardiac-data/) and the [ACDC Challenge](https://www.creatis.insa-lyon.fr/Challenge/acdc/) set can be used to train a similar network. The `train.json` configuration assumes all data is stored in a single npz file with keys "images" and "segs" containing, respectively, the raw image data and their accompanying segmentations. The given network was trained with stored volumes of shape `(9095, 256, 256)`, so other data of differing spatial dimensions must be cropped to `(256, 256)` or zero-padded to that size. For the training data this was done as a preprocessing step, but the pixel values are otherwise unchanged from their original forms. ## Training The network is trained with this data in conjunction with a series of augmentations for regularisation and robustness. Many of the original images are smaller than the expected size of `(256, 256)` and so were zero-padded; the network can thus be expected to be robust against large amounts of empty space in the inputs. Rotation and zooming are also applied to force the network to learn different sizes and orientations of the heart in the field of view. Free-form deformation is applied to vary the shape of the heart and its surrounding tissues, which mimics, to a degree, the deformation observed through the cardiac cycle. This of course does not replicate the heart moving through-plane during the cycle or represent other observed changes, but it does provide enough variation that full-cycle segmentation is generally acceptable. Smooth fields are used to vary contrast and intensity in localised regions to simulate some of the variation in image quality caused by acquisition artefacts. Gaussian noise is also added to simulate poor-quality acquisition. Together these force the network to learn to deal with a wider variation of image quality and partially account for differences between scanner vendors. Training is invoked with the following command line: ```sh python -m monai.bundle run training --meta_file configs/metadata.json --config_file configs/train.json --logging_file configs/logging.conf --bundle_root . ``` The dataset file is assumed to be `allimages3label.npz` but can be changed by setting the `dataset_file` value to your own file.
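The dataset layout described above (a single npz file with "images" and "segs" arrays of shape (N, 256, 256)) can be produced with a short preprocessing step. A minimal sketch, assuming `images` and `segs` are lists of 2D numpy arrays no larger than 256×256 that only need zero-padding:

```python
import numpy as np

def pad_to_256(arr):
    """Zero-pad a 2D array symmetrically to (256, 256), as the bundle expects."""
    pad_h = 256 - arr.shape[0]
    pad_w = 256 - arr.shape[1]
    return np.pad(arr, ((pad_h // 2, pad_h - pad_h // 2),
                        (pad_w // 2, pad_w - pad_w // 2)))

# `images` and `segs` are placeholders for your own lists of 2D slices and label maps.
images = [np.zeros((240, 240), dtype=np.float32)]
segs = [np.zeros((240, 240), dtype=np.uint8)]

# Keys "images" and "segs" match what train.json expects.
np.savez("allimages3label.npz",
         images=np.stack([pad_to_256(i) for i in images]),
         segs=np.stack([pad_to_256(s) for s in segs]))
```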
## Inference An example notebook [visualise.ipynb](./visualise.ipynb) demonstrates using the network directly with input images. Inference of 3D volumes only can be accomplished with the `inference.json` configuration: ```sh python -m monai.bundle run evaluating --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf --dataset_dir dataset --output_dir ./output/ --bundle_root . ```
WasuratS/q-Taxi-v3
WasuratS
2022-07-24T06:44:46Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-24T06:28:01Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="WasuratS/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
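The usage snippet above calls a `load_from_hub` helper without defining it. A minimal sketch of such a helper, assuming the repository stores the agent as a pickled dictionary (as in the Hugging Face Deep RL course):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    # Download the pickled Q-learning agent from the Hub and deserialize it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```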
Hairyrice/H
Hairyrice
2022-07-24T06:34:38Z
0
0
null
[ "region:us" ]
null
2022-07-24T06:33:50Z
He was just trying it out for the first time.
WasuratS/q-FrozenLake-v1-4x4-noSlippery
WasuratS
2022-07-24T06:22:28Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-24T06:22:21Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="WasuratS/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Sidhanttholenlp/distilbert-finetuned-imdb
Sidhanttholenlp
2022-07-24T05:39:01Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-24T05:04:10Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7304 | 1.0 | 110 | 2.5467 | | 2.6068 | 2.0 | 220 | 2.5176 | | 2.5769 | 3.0 | 330 | 2.4837 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
nishita/results
nishita
2022-07-24T01:28:03Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-23T15:21:06Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [gagan3012/k2t](https://huggingface.co/gagan3012/k2t) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5481 - Rouge1: 65.0534 - Rouge2: 45.7092 - Rougel: 55.8222 - Rougelsum: 57.1866 - Gen Len: 17.8061 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.5049 | 1.0 | 1101 | 0.5527 | 65.0475 | 45.6298 | 55.8323 | 57.2102 | 17.7929 | | 0.4994 | 2.0 | 2202 | 0.5490 | 65.0567 | 45.7082 | 55.8808 | 57.2343 | 17.8005 | | 0.4969 | 3.0 | 3303 | 0.5481 | 65.0534 | 45.7092 | 55.8222 | 57.1866 | 17.8061 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
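The card above has no usage example; a minimal sketch with the text2text-generation pipeline follows (the keyword-style prompt is only a guess at the input format the k2t base model expects).

```python
from transformers import pipeline

text2text = pipeline("text2text-generation", model="nishita/results")

# Placeholder keyword-style input; adjust to the format used during fine-tuning.
print(text2text("India | capital | Delhi"))
```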
vamsibanda/sbert-onnx-gtr-t5-xl
vamsibanda
2022-07-24T00:50:30Z
4
2
sentence-transformers
[ "sentence-transformers", "onnx", "t5", "sentence-similarity", "feature-extraction", "transformers", "en", "arxiv:2112.07899", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-21T16:02:04Z
--- language: en license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - transformers - onnx --- # This is the ONNX model of sentence-transformers/gtr-t5-xl [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899). Currently, Hugging Face does not support downloading ONNX files with external format files. I have created a workaround using sbert and optimum together to generate embeddings. ``` pip install onnx pip install onnxruntime==1.10.0 pip install transformers>4.6.1 pip install sentencepiece pip install sentence-transformers pip install optimum pip install torch==1.9.0 ``` Then you can use the model like this: ```python import os from sentence_transformers.util import snapshot_download from transformers import AutoTokenizer from optimum.onnxruntime import ORTModelForFeatureExtraction from sentence_transformers.models import Transformer, Pooling, Dense import torch from transformers.modeling_outputs import BaseModelOutput import torch.nn.functional as F import shutil model_name = 'vamsibanda/sbert-onnx-gtr-t5-xl' cache_folder = './' model_path = os.path.join(cache_folder, model_name.replace("/", "_")) def generate_embedding(text): token = tokenizer(text, return_tensors='pt') embeddings = model(input_ids=token['input_ids'], attention_mask=token['attention_mask']) sbert_embeddings = mean_pooling(embeddings, token['attention_mask']) sbert_embeddings = dense_layer.forward({'sentence_embedding':sbert_embeddings}) sbert_embeddings = F.normalize(sbert_embeddings['sentence_embedding'], p=2, dim=1) return sbert_embeddings.tolist()[0] def download_onnx_model(model_name, cache_folder, model_path, force_download = False): if force_download and os.path.exists(model_path): shutil.rmtree(model_path) elif os.path.exists(model_path): return snapshot_download(model_name, cache_dir=cache_folder, library_name='sentence-transformers' ) return def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) def generate_embedding(text): token = tokenizer(text, return_tensors='pt') embedding = model(input_ids=token['input_ids'], attention_mask=token['attention_mask']) embedding = mean_pooling(embedding, token['attention_mask']) embedding = dense_layer.forward({'sentence_embedding':embedding}) embedding = F.normalize(embedding['sentence_embedding'], p=2, dim=1) return embedding.tolist()[0] _ = download_onnx_model(model_name, cache_folder, model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) model = ORTModelForFeatureExtraction.from_pretrained(model_path, force_download=False) pooling_layer = Pooling.load(f"{model_path}/1_Pooling") dense_layer = Dense.load(f"{model_path}/2_Dense") generate_embedding('That is a happy person') ```
richardbaihe/a3t-vctk
richardbaihe
2022-07-23T23:00:45Z
0
0
null
[ "tensorboard", "license:apache-2.0", "region:us" ]
null
2022-06-27T01:01:01Z
--- license: apache-2.0 --- There are two folders now: - conformer: Conformer A3T trained with all VCTK training data. - unseen_conformer: Conformer A3T trained by excluding some speakers during the training.
sudo-s/modeversion1_m7_e4
sudo-s
2022-07-23T22:44:11Z
53
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-23T18:20:54Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: modeversion1_m7_e4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # modeversion1_m7_e4 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem7 dataset. It achieves the following results on the evaluation set: - Loss: 0.0902 - Accuracy: 0.9731 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.073 | 0.06 | 100 | 3.9370 | 0.1768 | | 3.4186 | 0.12 | 200 | 3.2721 | 0.2590 | | 2.6745 | 0.18 | 300 | 2.6465 | 0.3856 | | 2.2806 | 0.23 | 400 | 2.2600 | 0.4523 | | 1.9275 | 0.29 | 500 | 1.9653 | 0.5109 | | 1.6958 | 0.35 | 600 | 1.6815 | 0.6078 | | 1.2797 | 0.41 | 700 | 1.4514 | 0.6419 | | 1.3772 | 0.47 | 800 | 1.3212 | 0.6762 | | 1.1765 | 0.53 | 900 | 1.1476 | 0.7028 | | 1.0152 | 0.59 | 1000 | 1.0357 | 0.7313 | | 0.7861 | 0.64 | 1100 | 1.0230 | 0.7184 | | 1.0262 | 0.7 | 1200 | 0.9469 | 0.7386 | | 0.8905 | 0.76 | 1300 | 0.8184 | 0.7756 | | 0.6919 | 0.82 | 1400 | 0.8083 | 0.7711 | | 0.7494 | 0.88 | 1500 | 0.7601 | 0.7825 | | 0.5078 | 0.94 | 1600 | 0.6884 | 0.8056 | | 0.7134 | 1.0 | 1700 | 0.6311 | 0.8160 | | 0.4328 | 1.06 | 1800 | 0.5740 | 0.8252 | | 0.4971 | 1.11 | 1900 | 0.5856 | 0.8290 | | 0.5207 | 1.17 | 2000 | 0.6219 | 0.8167 | | 0.4027 | 1.23 | 2100 | 0.5703 | 0.8266 | | 0.5605 | 1.29 | 2200 | 0.5217 | 0.8372 | | 0.2723 | 1.35 | 2300 | 0.4805 | 0.8565 | | 0.401 | 1.41 | 2400 | 0.4811 | 0.8490 | | 0.3419 | 1.47 | 2500 | 0.4619 | 0.8608 | | 0.301 | 1.52 | 2600 | 0.4318 | 0.8712 | | 0.2872 | 1.58 | 2700 | 0.4698 | 0.8573 | | 0.2451 | 1.64 | 2800 | 0.4210 | 0.8729 | | 0.2211 | 1.7 | 2900 | 0.3645 | 0.8851 | | 0.3145 | 1.76 | 3000 | 0.4139 | 0.8715 | | 0.2001 | 1.82 | 3100 | 0.3605 | 0.8864 | | 0.3095 | 1.88 | 3200 | 0.4274 | 0.8675 | | 0.1915 | 1.93 | 3300 | 0.2910 | 0.9101 | | 0.2465 | 1.99 | 3400 | 0.2726 | 0.9103 | | 0.1218 | 2.05 | 3500 | 0.2742 | 0.9129 | | 0.0752 | 2.11 | 3600 | 0.2572 | 0.9183 | | 0.1067 | 2.17 | 3700 | 0.2584 | 0.9203 | | 0.0838 | 2.23 | 3800 | 0.2458 | 0.9212 | | 0.1106 | 2.29 | 3900 | 0.2412 | 0.9237 | | 0.092 | 2.34 | 4000 | 0.2232 | 0.9277 | | 0.1056 | 2.4 | 4100 | 0.2817 | 0.9077 | | 0.0696 | 2.46 | 4200 | 0.2334 | 0.9285 | | 0.0444 | 2.52 | 4300 | 0.2142 | 0.9363 | | 0.1046 | 2.58 | 4400 | 0.2036 | 0.9352 | | 0.066 | 2.64 | 4500 | 0.2115 | 0.9365 | | 0.0649 | 2.7 | 4600 | 0.1730 | 0.9448 | | 0.0513 | 2.75 | 4700 | 0.2148 | 0.9339 | | 0.0917 | 2.81 | 4800 | 0.1810 | 0.9438 | | 0.0879 | 2.87 | 4900 | 0.1971 | 0.9388 | | 0.1052 | 2.93 | 5000 | 0.1602 | 0.9508 | | 0.0362 | 2.99 | 5100 | 0.1475 | 0.9556 | | 0.041 | 3.05 | 5200 | 0.1328 | 0.9585 | | 0.0156 | 3.11 | 5300 | 0.1389 | 0.9571 | | 0.0047 | 3.17 | 5400 | 0.1224 | 0.9638 
| | 0.0174 | 3.22 | 5500 | 0.1193 | 0.9651 | | 0.0087 | 3.28 | 5600 | 0.1276 | 0.9622 | | 0.0084 | 3.34 | 5700 | 0.1134 | 0.9662 | | 0.0141 | 3.4 | 5800 | 0.1239 | 0.9631 | | 0.0291 | 3.46 | 5900 | 0.1199 | 0.9645 | | 0.0049 | 3.52 | 6000 | 0.1103 | 0.9679 | | 0.0055 | 3.58 | 6100 | 0.1120 | 0.9662 | | 0.0061 | 3.63 | 6200 | 0.1071 | 0.9668 | | 0.0054 | 3.69 | 6300 | 0.1032 | 0.9697 | | 0.0041 | 3.75 | 6400 | 0.0961 | 0.9711 | | 0.0018 | 3.81 | 6500 | 0.0930 | 0.9718 | | 0.0032 | 3.87 | 6600 | 0.0918 | 0.9730 | | 0.0048 | 3.93 | 6700 | 0.0906 | 0.9732 | | 0.002 | 3.99 | 6800 | 0.0902 | 0.9731 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.3.2 - Tokenizers 0.12.1
Chris1/a2c-SpaceInvadersNoFrameskip-v4
Chris1
2022-07-23T22:23:15Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-23T22:22:54Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 532.50 +/- 105.79 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **A2C** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **A2C** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo a2c --env SpaceInvadersNoFrameskip-v4 -orga Chris1 -f logs/ python enjoy.py --algo a2c --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo a2c --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo a2c --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Chris1 ``` ## Hyperparameters ```python OrderedDict([('ent_coef', 0.01), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('frame_stack', 4), ('n_envs', 16), ('n_timesteps', 10000000.0), ('policy', 'CnnPolicy'), ('policy_kwargs', 'dict(optimizer_class=RMSpropTFLike, ' 'optimizer_kwargs=dict(eps=1e-5))'), ('vf_coef', 0.25), ('normalize', False)]) ```
Chris1/qrdqn-SpaceInvadersNoFrameskip-v4
Chris1
2022-07-23T22:20:05Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-23T22:19:33Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: QRDQN results: - metrics: - type: mean_reward value: 1647.00 +/- 742.05 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga Chris1 -f logs/ python enjoy.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Chris1 ``` ## Hyperparameters ```python OrderedDict([('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_fraction', 0.025), ('frame_stack', 4), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('normalize', False)]) ```
huggingtweets/bicyclingmag-bike24net-planetcyclery
huggingtweets
2022-07-23T21:47:24Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-23T21:38:17Z
--- language: en thumbnail: http://www.huggingtweets.com/bicyclingmag-bike24net-planetcyclery/1658612826681/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/596705203358801920/mQ6ZGz9R_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/781477479332577280/OOud15hY_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/837440117505585152/kquV327z_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Bicycling Magazine & BIKE24 & Planet Cyclery</div> <div style="text-align: center; font-size: 14px;">@bicyclingmag-bike24net-planetcyclery</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Bicycling Magazine & BIKE24 & Planet Cyclery. | Data | Bicycling Magazine | BIKE24 | Planet Cyclery | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3200 | 1636 | | Retweets | 3 | 42 | 48 | | Short tweets | 31 | 231 | 22 | | Tweets kept | 3216 | 2927 | 1566 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/dpmz7fyw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bicyclingmag-bike24net-planetcyclery's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15ynynm2) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15ynynm2/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/bicyclingmag-bike24net-planetcyclery') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Kuro96/dqn-SpaceInvadersNoFrameskip-v4
Kuro96
2022-07-23T21:21:08Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-23T21:20:36Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 547.00 +/- 194.62 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Kuro96 -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Kuro96 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
huggingtweets/vgdunkey-vgdunkeybot
huggingtweets
2022-07-23T21:18:37Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-23T08:20:40Z
--- language: en thumbnail: http://www.huggingtweets.com/vgdunkey-vgdunkeybot/1658611112335/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/676614171849453568/AZd1Bh-s_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/727879199931944961/vkkeC6d2_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">dunkey & dunkey bot</div> <div style="text-align: center; font-size: 14px;">@vgdunkey-vgdunkeybot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from dunkey & dunkey bot. | Data | dunkey | dunkey bot | | --- | --- | --- | | Tweets downloaded | 1282 | 3200 | | Retweets | 147 | 0 | | Short tweets | 327 | 526 | | Tweets kept | 808 | 2674 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/208r9p27/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vgdunkey-vgdunkeybot's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/m3it0jfs) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/m3it0jfs/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/vgdunkey-vgdunkeybot') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
osanseviero/hf_hub_example-023f3150-3eae-45a4-bd3c-7a95639e10e0
osanseviero
2022-07-23T21:11:54Z
0
0
sklearn
[ "sklearn", "region:us" ]
null
2022-07-23T21:11:49Z
--- library_name: sklearn --- # Model description This is a HistGradientBoostingClassifier model trained on breast cancer dataset. It's trained with Halving Grid Search Cross Validation, with parameter grids on max_leaf_nodes and max_depth. ## Intended uses & limitations This model is not ready to be used in production. ## Training Procedure ### Hyperparameters The model is trained with below hyperparameters. <details> <summary> Click to expand </summary> | Hyperparameters | Value | | :-- | :-- | | aggressive_elimination | False | | cv | 5 | | error_score | nan | | estimator__categorical_features | None | | estimator__early_stopping | auto | | estimator__l2_regularization | 0.0 | | estimator__learning_rate | 0.1 | | estimator__loss | log_loss | | estimator__max_bins | 255 | | estimator__max_depth | None | | estimator__max_iter | 100 | | estimator__max_leaf_nodes | 31 | | estimator__min_samples_leaf | 20 | | estimator__monotonic_cst | None | | estimator__n_iter_no_change | 10 | | estimator__random_state | None | | estimator__scoring | loss | | estimator__tol | 1e-07 | | estimator__validation_fraction | 0.1 | | estimator__verbose | 0 | | estimator__warm_start | False | | estimator | HistGradientBoostingClassifier() | | factor | 3 | | max_resources | auto | | min_resources | exhaust | | n_jobs | -1 | | param_grid | {'max_leaf_nodes': [5, 10, 15], 'max_depth': [2, 5, 10]} | | random_state | 42 | | refit | True | | resource | n_samples | | return_train_score | True | | scoring | None | | verbose | 0 | </details> ### Model Plot The model plot is below. <style>#sk-container-id-1 {color: black;background-color: white;}#sk-container-id-1 pre{padding: 0;}#sk-container-id-1 div.sk-toggleable {background-color: white;}#sk-container-id-1 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-1 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-1 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-1 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-1 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-1 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-1 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-1 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-1 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-1 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-1 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-1 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-1 
div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-1 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-1 div.sk-item {position: relative;z-index: 1;}#sk-container-id-1 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-1 div.sk-item::before, #sk-container-id-1 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-1 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-1 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-1 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-1 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-1 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-1 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-1 div.sk-label-container {text-align: center;}#sk-container-id-1 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-1 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-1" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1,param_grid={&#x27;max_depth&#x27;: [2, 5, 10],&#x27;max_leaf_nodes&#x27;: [5, 10, 15]},random_state=42)</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. 
<br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-1" type="checkbox" ><label for="sk-estimator-id-1" class="sk-toggleable__label sk-toggleable__label-arrow">HalvingGridSearchCV</label><div class="sk-toggleable__content"><pre>HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1,param_grid={&#x27;max_depth&#x27;: [2, 5, 10],&#x27;max_leaf_nodes&#x27;: [5, 10, 15]},random_state=42)</pre></div></div></div><div class="sk-parallel"><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-2" type="checkbox" ><label for="sk-estimator-id-2" class="sk-toggleable__label sk-toggleable__label-arrow">estimator: HistGradientBoostingClassifier</label><div class="sk-toggleable__content"><pre>HistGradientBoostingClassifier()</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-3" type="checkbox" ><label for="sk-estimator-id-3" class="sk-toggleable__label sk-toggleable__label-arrow">HistGradientBoostingClassifier</label><div class="sk-toggleable__content"><pre>HistGradientBoostingClassifier()</pre></div></div></div></div></div></div></div></div></div></div> # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ``` import pickle with open(dtc_pkl_filename, 'rb') as file: clf = pickle.load(file) ``` </details> # Model Card Authors This model card is written by following authors: skops_user # Model Card Contact You can contact the model card authors through following channels: [More Information Needed] # Citation Below you can find information related to citation. **BibTeX:** ``` [More Information Needed] ``` confusion_matrix ![confusion_matrix](confusion_matrix.png)
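The "How to Get Started" snippet in the card above references an undefined `dtc_pkl_filename`. A minimal sketch that loads the pickled search object and runs it on the breast cancer dataset mentioned in the description (the pickle filename is a placeholder):

```python
import pickle
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

# "model.pkl" is a placeholder; use the pickle filename actually shipped in the repository.
with open("model.pkl", "rb") as f:
    clf = pickle.load(f)

# clf is the fitted HalvingGridSearchCV; predict() uses the refit best estimator.
print(clf.predict(X[:5]))
```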
huggingtweets/vgdunkey-vgdunkeybot-videobotdunkey
huggingtweets
2022-07-23T21:11:28Z
8
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-23T21:10:35Z
--- language: en thumbnail: http://www.huggingtweets.com/vgdunkey-vgdunkeybot-videobotdunkey/1658610683659/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/676614171849453568/AZd1Bh-s_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/727879199931944961/vkkeC6d2_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/889145771760680960/F3g-pbn2_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">dunkey & dunkey bot & dunkey bot</div> <div style="text-align: center; font-size: 14px;">@vgdunkey-vgdunkeybot-videobotdunkey</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from dunkey & dunkey bot & dunkey bot. | Data | dunkey | dunkey bot | dunkey bot | | --- | --- | --- | --- | | Tweets downloaded | 1282 | 3200 | 911 | | Retweets | 147 | 0 | 1 | | Short tweets | 327 | 526 | 33 | | Tweets kept | 808 | 2674 | 877 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1gs4ik1d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vgdunkey-vgdunkeybot-videobotdunkey's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/qqqwy9dp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/qqqwy9dp/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/vgdunkey-vgdunkeybot-videobotdunkey') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
osanseviero/hf_hub_example-821defb0-1482-4d27-884d-4359bfad704f
osanseviero
2022-07-23T21:08:24Z
0
0
sklearn
[ "sklearn", "region:us" ]
null
2022-07-23T21:08:18Z
--- library_name: sklearn --- # Model description This is a HistGradientBoostingClassifier model trained on breast cancer dataset. It's trained with Halving Grid Search Cross Validation, with parameter grids on max_leaf_nodes and max_depth. ## Intended uses & limitations This model is not ready to be used in production. ## Training Procedure ### Hyperparameters The model is trained with below hyperparameters. <details> <summary> Click to expand </summary> | Hyperparameters | Value | | :-- | :-- | | aggressive_elimination | False | | cv | 5 | | error_score | nan | | estimator__categorical_features | None | | estimator__early_stopping | auto | | estimator__l2_regularization | 0.0 | | estimator__learning_rate | 0.1 | | estimator__loss | log_loss | | estimator__max_bins | 255 | | estimator__max_depth | None | | estimator__max_iter | 100 | | estimator__max_leaf_nodes | 31 | | estimator__min_samples_leaf | 20 | | estimator__monotonic_cst | None | | estimator__n_iter_no_change | 10 | | estimator__random_state | None | | estimator__scoring | loss | | estimator__tol | 1e-07 | | estimator__validation_fraction | 0.1 | | estimator__verbose | 0 | | estimator__warm_start | False | | estimator | HistGradientBoostingClassifier() | | factor | 3 | | max_resources | auto | | min_resources | exhaust | | n_jobs | -1 | | param_grid | {'max_leaf_nodes': [5, 10, 15], 'max_depth': [2, 5, 10]} | | random_state | 42 | | refit | True | | resource | n_samples | | return_train_score | True | | scoring | None | | verbose | 0 | </details> ### Model Plot The model plot is below. <style>#sk-container-id-1 {color: black;background-color: white;}#sk-container-id-1 pre{padding: 0;}#sk-container-id-1 div.sk-toggleable {background-color: white;}#sk-container-id-1 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-1 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-1 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-1 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-1 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-1 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-1 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-1 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-1 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-1 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-1 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-1 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-1 
div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-1 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-1 div.sk-item {position: relative;z-index: 1;}#sk-container-id-1 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-1 div.sk-item::before, #sk-container-id-1 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-1 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-1 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-1 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-1 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-1 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-1 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-1 div.sk-label-container {text-align: center;}#sk-container-id-1 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-1 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-1" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1,param_grid={&#x27;max_depth&#x27;: [2, 5, 10],&#x27;max_leaf_nodes&#x27;: [5, 10, 15]},random_state=42)</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. 
<br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-1" type="checkbox" ><label for="sk-estimator-id-1" class="sk-toggleable__label sk-toggleable__label-arrow">HalvingGridSearchCV</label><div class="sk-toggleable__content"><pre>HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1,param_grid={&#x27;max_depth&#x27;: [2, 5, 10],&#x27;max_leaf_nodes&#x27;: [5, 10, 15]},random_state=42)</pre></div></div></div><div class="sk-parallel"><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-2" type="checkbox" ><label for="sk-estimator-id-2" class="sk-toggleable__label sk-toggleable__label-arrow">estimator: HistGradientBoostingClassifier</label><div class="sk-toggleable__content"><pre>HistGradientBoostingClassifier()</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-3" type="checkbox" ><label for="sk-estimator-id-3" class="sk-toggleable__label sk-toggleable__label-arrow">HistGradientBoostingClassifier</label><div class="sk-toggleable__content"><pre>HistGradientBoostingClassifier()</pre></div></div></div></div></div></div></div></div></div></div> # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ``` import pickle with open(dtc_pkl_filename, 'rb') as file: clf = pickle.load(file) ``` </details> # Model Card Authors This model card is written by following authors: skops_user # Model Card Contact You can contact the model card authors through following channels: [More Information Needed] # Citation Below you can find information related to citation. **BibTeX:** ``` [More Information Needed] ```
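A minimal sketch of how a search like the one described above could be reproduced, assuming the scikit-learn breast cancer loader and a plain train/test split (the split itself is not documented in the card); the parameter grid, cv, factor, n_jobs and random_state follow the hyperparameter table.

```python
# Sketch: HalvingGridSearchCV over HistGradientBoostingClassifier with the grid above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.experimental import enable_halving_search_cv  # noqa: F401 (required before the import below)
from sklearn.model_selection import HalvingGridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

search = HalvingGridSearchCV(
    estimator=HistGradientBoostingClassifier(),
    param_grid={"max_leaf_nodes": [5, 10, 15], "max_depth": [2, 5, 10]},
    cv=5,
    factor=3,
    n_jobs=-1,
    random_state=42,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```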
osanseviero/hf_hub_example-f7d1d7e5-f207-4eef-99bb-57408d604e2b
osanseviero
2022-07-23T21:04:42Z
0
0
sklearn
[ "sklearn", "region:us" ]
null
2022-07-23T21:04:37Z
--- library_name: sklearn --- # Model description [More Information Needed] ## Intended uses & limitations [More Information Needed] ## Training Procedure ### Hyperparameters The model is trained with below hyperparameters. <details> <summary> Click to expand </summary> | Hyperparameters | Value | | :-- | :-- | | aggressive_elimination | False | | cv | 5 | | error_score | nan | | estimator__categorical_features | None | | estimator__early_stopping | auto | | estimator__l2_regularization | 0.0 | | estimator__learning_rate | 0.1 | | estimator__loss | log_loss | | estimator__max_bins | 255 | | estimator__max_depth | None | | estimator__max_iter | 100 | | estimator__max_leaf_nodes | 31 | | estimator__min_samples_leaf | 20 | | estimator__monotonic_cst | None | | estimator__n_iter_no_change | 10 | | estimator__random_state | None | | estimator__scoring | loss | | estimator__tol | 1e-07 | | estimator__validation_fraction | 0.1 | | estimator__verbose | 0 | | estimator__warm_start | False | | estimator | HistGradientBoostingClassifier() | | factor | 3 | | max_resources | auto | | min_resources | exhaust | | n_jobs | -1 | | param_grid | {'max_leaf_nodes': [5, 10, 15], 'max_depth': [2, 5, 10]} | | random_state | 42 | | refit | True | | resource | n_samples | | return_train_score | True | | scoring | None | | verbose | 0 | </details> ### Model Plot The model plot is below. <style>#sk-container-id-1 {color: black;background-color: white;}#sk-container-id-1 pre{padding: 0;}#sk-container-id-1 div.sk-toggleable {background-color: white;}#sk-container-id-1 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-1 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-1 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-1 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-1 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-1 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-1 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-1 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-1 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-1 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-1 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-1 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-1 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 
0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-1 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-1 div.sk-item {position: relative;z-index: 1;}#sk-container-id-1 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-1 div.sk-item::before, #sk-container-id-1 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-1 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-1 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-1 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-1 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-1 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-1 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-1 div.sk-label-container {text-align: center;}#sk-container-id-1 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-1 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-1" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1,param_grid={&#x27;max_depth&#x27;: [2, 5, 10],&#x27;max_leaf_nodes&#x27;: [5, 10, 15]},random_state=42)</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. 
<br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-1" type="checkbox" ><label for="sk-estimator-id-1" class="sk-toggleable__label sk-toggleable__label-arrow">HalvingGridSearchCV</label><div class="sk-toggleable__content"><pre>HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1,param_grid={&#x27;max_depth&#x27;: [2, 5, 10],&#x27;max_leaf_nodes&#x27;: [5, 10, 15]},random_state=42)</pre></div></div></div><div class="sk-parallel"><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-2" type="checkbox" ><label for="sk-estimator-id-2" class="sk-toggleable__label sk-toggleable__label-arrow">estimator: HistGradientBoostingClassifier</label><div class="sk-toggleable__content"><pre>HistGradientBoostingClassifier()</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-3" type="checkbox" ><label for="sk-estimator-id-3" class="sk-toggleable__label sk-toggleable__label-arrow">HistGradientBoostingClassifier</label><div class="sk-toggleable__content"><pre>HistGradientBoostingClassifier()</pre></div></div></div></div></div></div></div></div></div></div> # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ``` [More Information Needed] ``` </details> # Model Card Authors This model card is written by following authors: [More Information Needed] # Model Card Contact You can contact the model card authors through following channels: [More Information Needed] # Citation Below you can find information related to citation. **BibTeX:** ``` [More Information Needed] ```
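Since the get-started section above is still a placeholder, here is a hedged sketch of loading a serialized estimator of this kind and inspecting the fitted search; the file name `model.pkl` is an assumption, so check the repository files for the actual artifact.

```python
# Sketch: load a pickled HalvingGridSearchCV and inspect its results.
# "model.pkl" is an assumed file name, not confirmed by the repository.
import pickle

with open("model.pkl", "rb") as f:
    search = pickle.load(f)

print(search.best_params_)  # winning combination from the grid above
print(search.best_score_)   # mean cross-validated score of the best candidate
```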
jcashmoney123/autotrain-amz-1171143428
jcashmoney123
2022-07-23T18:31:20Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain", "unk", "dataset:jcashmoney123/autotrain-data-amz", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-23T18:27:51Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - jcashmoney123/autotrain-data-amz co2_eq_emissions: 5.4331208624177245 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1171143428 - CO2 Emissions (in grams): 5.4331208624177245 ## Validation Metrics - Loss: 2.5859596729278564 - Rouge1: 19.3601 - Rouge2: 4.6055 - RougeL: 17.4309 - RougeLsum: 17.4621 - Gen Len: 15.2938 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/jcashmoney123/autotrain-amz-1171143428 ```
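The same request can be sent from Python; the sketch below mirrors the cURL example but uses the standard `https://api-inference.huggingface.co/models/...` route, and the API key is a placeholder.

```python
# Sketch: call the hosted Inference API for this summarization model from Python.
import requests

API_URL = "https://api-inference.huggingface.co/models/jcashmoney123/autotrain-amz-1171143428"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}  # placeholder token

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```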
oMateos2020/t5-small_adafactor
oMateos2020
2022-07-23T18:20:11Z
12
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-20T11:32:51Z
--- tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small_adafactor results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 32.8631 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small_adafactor This model is a fine-tuned version of [oMateos2020/t5-small_adafactor](https://huggingface.co/oMateos2020/t5-small_adafactor) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.1167 - Rouge1: 32.8631 - Rouge2: 11.658 - Rougel: 26.6192 - Rougelsum: 26.6224 - Gen Len: 18.7663 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adafactor - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.1315 | 0.02 | 200 | 2.1865 | 31.9486 | 10.9605 | 25.7418 | 25.7408 | 18.8466 | | 2.1297 | 0.05 | 400 | 2.1965 | 31.9598 | 10.9463 | 25.784 | 25.7867 | 18.8525 | | 2.1284 | 0.07 | 600 | 2.1981 | 32.231 | 11.1003 | 26.0155 | 26.0226 | 18.8466 | | 2.1315 | 0.09 | 800 | 2.1873 | 31.9161 | 10.8642 | 25.7166 | 25.7273 | 18.8227 | | 2.1212 | 0.12 | 1000 | 2.1892 | 32.4646 | 11.1852 | 26.2451 | 26.2439 | 18.8259 | | 2.1028 | 0.14 | 1200 | 2.1978 | 32.2886 | 11.1346 | 26.0795 | 26.0827 | 18.7685 | | 2.1221 | 0.16 | 1400 | 2.1936 | 32.2901 | 11.0821 | 25.9983 | 26.0024 | 18.7798 | | 2.1168 | 0.19 | 1600 | 2.1922 | 32.1655 | 11.1451 | 25.986 | 25.9893 | 18.8232 | | 2.1166 | 0.21 | 1800 | 2.1836 | 32.2611 | 11.174 | 26.0594 | 26.0688 | 18.7633 | | 2.1053 | 0.24 | 2000 | 2.1929 | 32.3321 | 11.213 | 26.1859 | 26.1903 | 18.7758 | | 2.1126 | 0.26 | 2200 | 2.1811 | 32.2078 | 11.1792 | 26.0776 | 26.0817 | 18.8197 | | 2.1038 | 0.28 | 2400 | 2.1836 | 32.2799 | 11.2511 | 26.1191 | 26.1251 | 18.7884 | | 2.1181 | 0.31 | 2600 | 2.1805 | 32.1197 | 11.1586 | 26.0441 | 26.0441 | 18.8045 | | 2.1217 | 0.33 | 2800 | 2.1806 | 32.3051 | 11.2638 | 26.1319 | 26.1386 | 18.7886 | | 2.116 | 0.35 | 3000 | 2.1741 | 32.2799 | 11.1887 | 26.1224 | 26.1363 | 18.7769 | | 2.1118 | 0.38 | 3200 | 2.1767 | 32.387 | 11.2053 | 26.077 | 26.0845 | 18.8407 | | 2.1164 | 0.4 | 3400 | 2.1743 | 32.5008 | 11.4021 | 26.3291 | 26.3297 | 18.7731 | | 2.1068 | 0.42 | 3600 | 2.1673 | 32.2347 | 11.1676 | 26.0657 | 26.0662 | 18.817 | | 2.1276 | 0.45 | 3800 | 2.1664 | 32.2434 | 11.2862 | 26.094 | 26.0994 | 18.7713 | | 2.1313 | 0.47 | 4000 | 2.1636 | 32.694 | 11.3724 | 26.4071 | 26.4008 | 18.7709 | | 2.1229 | 0.49 | 4200 | 2.1633 | 32.456 | 11.4057 | 26.2733 | 26.2689 | 18.7586 | | 2.129 | 0.52 | 4400 | 2.1641 | 32.309 | 11.2133 | 26.1062 | 26.1121 | 18.7729 | | 2.1425 | 0.54 | 4600 | 2.1577 | 32.5879 | 11.4001 | 26.3045 | 26.3078 | 18.8104 | | 2.1536 | 0.56 | 4800 | 2.1507 | 32.5152 | 11.4035 | 26.3054 | 26.3116 | 18.7941 | | 2.148 | 0.59 | 5000 | 2.1503 | 32.8088 | 11.5641 | 26.5346 | 
26.5311 | 18.7602 | | 2.1541 | 0.61 | 5200 | 2.1491 | 32.8185 | 11.5816 | 26.5261 | 26.527 | 18.7654 | | 2.155 | 0.64 | 5400 | 2.1466 | 32.7229 | 11.5339 | 26.4363 | 26.442 | 18.8404 | | 2.1579 | 0.66 | 5600 | 2.1435 | 32.884 | 11.6042 | 26.5862 | 26.5891 | 18.7713 | | 2.1601 | 0.68 | 5800 | 2.1393 | 32.8027 | 11.5328 | 26.4521 | 26.4567 | 18.7904 | | 2.1765 | 0.71 | 6000 | 2.1393 | 32.8059 | 11.5751 | 26.5499 | 26.5551 | 18.7768 | | 2.2176 | 0.73 | 6200 | 2.1345 | 33.0734 | 11.8056 | 26.7546 | 26.7607 | 18.7756 | | 2.2126 | 0.75 | 6400 | 2.1328 | 32.7478 | 11.5925 | 26.5333 | 26.5359 | 18.7819 | | 2.1916 | 0.78 | 6600 | 2.1298 | 32.658 | 11.491 | 26.379 | 26.3869 | 18.8101 | | 2.2162 | 0.8 | 6800 | 2.1297 | 32.7843 | 11.5629 | 26.4736 | 26.4728 | 18.8187 | | 2.2358 | 0.82 | 7000 | 2.1287 | 32.9181 | 11.6378 | 26.5966 | 26.5987 | 18.8039 | | 2.2371 | 0.85 | 7200 | 2.1265 | 32.8413 | 11.674 | 26.5905 | 26.5831 | 18.7962 | | 2.256 | 0.87 | 7400 | 2.1245 | 32.7412 | 11.5627 | 26.4976 | 26.503 | 18.7728 | | 2.2566 | 0.89 | 7600 | 2.1220 | 32.8165 | 11.6069 | 26.5301 | 26.5295 | 18.7871 | | 2.2954 | 0.92 | 7800 | 2.1197 | 32.7399 | 11.5417 | 26.4914 | 26.4938 | 18.7752 | | 2.2766 | 0.94 | 8000 | 2.1187 | 32.853 | 11.6411 | 26.5909 | 26.5938 | 18.7852 | | 2.3273 | 0.96 | 8200 | 2.1169 | 32.9376 | 11.709 | 26.6665 | 26.6672 | 18.7734 | | 2.3182 | 0.99 | 8400 | 2.1167 | 32.8631 | 11.658 | 26.6192 | 26.6224 | 18.7663 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
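A short usage sketch for the checkpoint above via the standard summarization pipeline; the sample article is made up and only stands in for an XSum-style input.

```python
# Sketch: summarize a short article with the fine-tuned T5 checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="oMateos2020/t5-small_adafactor")
article = (
    "The council confirmed that the bridge will close for repairs next month, "
    "with diversions in place for local traffic until the work is finished."
)
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```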
Siyong/MC_RN
Siyong
2022-07-23T16:22:03Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-23T10:22:37Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Millad_Customer_RN results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Millad_Customer_RN This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.5635 - Wer: 0.8113 - Cer: 0.4817 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4000 - num_epochs: 600 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:-----:|:---------------:|:------:|:------:| | 1.9257 | 13.33 | 2000 | 2.0606 | 0.9767 | 0.5500 | | 1.4828 | 26.67 | 4000 | 2.1161 | 0.9019 | 0.4932 | | 1.2582 | 40.0 | 6000 | 2.0589 | 0.8504 | 0.4942 | | 0.9804 | 53.33 | 8000 | 2.4633 | 0.8745 | 0.4763 | | 0.7862 | 66.67 | 10000 | 2.4794 | 0.8861 | 0.4944 | | 0.6492 | 80.0 | 12000 | 2.8693 | 0.8554 | 0.4928 | | 0.5375 | 93.33 | 14000 | 2.6125 | 0.8296 | 0.4802 | | 0.4462 | 106.67 | 16000 | 2.7591 | 0.8770 | 0.4974 | | 0.3873 | 120.0 | 18000 | 3.0325 | 0.8379 | 0.4800 | | 0.3445 | 133.33 | 20000 | 2.9965 | 0.8761 | 0.4986 | | 0.3087 | 146.67 | 22000 | 3.3437 | 0.8221 | 0.4923 | | 0.2755 | 160.0 | 24000 | 3.3022 | 0.8803 | 0.5211 | | 0.2467 | 173.33 | 26000 | 3.2348 | 0.8479 | 0.4933 | | 0.2281 | 186.67 | 28000 | 3.8010 | 0.8695 | 0.5081 | | 0.2119 | 200.0 | 30000 | 3.0446 | 0.8545 | 0.4902 | | 0.194 | 213.33 | 32000 | 3.0873 | 0.8454 | 0.4840 | | 0.1677 | 226.67 | 34000 | 3.6184 | 0.8645 | 0.5019 | | 0.1642 | 240.0 | 36000 | 3.2480 | 0.8412 | 0.4903 | | 0.1656 | 253.33 | 38000 | 3.4379 | 0.8362 | 0.4816 | | 0.1371 | 266.67 | 40000 | 3.5117 | 0.8479 | 0.5040 | | 0.1301 | 280.0 | 42000 | 3.4360 | 0.8404 | 0.4870 | | 0.128 | 293.33 | 44000 | 3.6589 | 0.8537 | 0.4977 | | 0.1152 | 306.67 | 46000 | 4.2359 | 0.8545 | 0.5051 | | 0.1119 | 320.0 | 48000 | 3.5818 | 0.7980 | 0.4882 | | 0.1026 | 333.33 | 50000 | 3.7618 | 0.8013 | 0.4865 | | 0.0945 | 346.67 | 52000 | 4.2197 | 0.8404 | 0.5028 | | 0.0962 | 360.0 | 54000 | 3.9231 | 0.8653 | 0.5030 | | 0.088 | 373.33 | 56000 | 3.8400 | 0.8354 | 0.4914 | | 0.0743 | 386.67 | 58000 | 3.4924 | 0.8088 | 0.4824 | | 0.0811 | 400.0 | 60000 | 3.8370 | 0.8396 | 0.4861 | | 0.0696 | 413.33 | 62000 | 4.2808 | 0.8412 | 0.5065 | | 0.0692 | 426.67 | 64000 | 4.0161 | 0.8088 | 0.4744 | | 0.0622 | 440.0 | 66000 | 3.9080 | 0.8163 | 0.4910 | | 0.0591 | 453.33 | 68000 | 3.9838 | 0.8113 | 0.4823 | | 0.0527 | 466.67 | 70000 | 3.8067 | 0.8329 | 0.4914 | | 0.056 | 480.0 | 72000 | 4.1415 | 0.8096 | 0.4782 | | 0.0535 | 493.33 | 74000 | 4.3350 | 0.8229 | 0.4828 | | 0.0531 | 506.67 | 76000 | 3.9808 | 0.8071 | 0.4807 | | 0.0451 | 520.0 | 78000 | 4.0301 | 0.7988 | 0.4816 | | 0.044 | 533.33 | 80000 | 4.4680 | 0.8371 | 0.4921 | | 0.0389 | 546.67 | 82000 | 4.1380 | 0.8121 | 0.4819 | | 0.0392 | 560.0 | 84000 | 4.3910 | 0.7930 | 0.4763 | | 0.0389 | 573.33 | 86000 | 4.5086 | 0.8055 | 
0.4802 | | 0.0355 | 586.67 | 88000 | 4.6259 | 0.8113 | 0.4821 | | 0.0307 | 600.0 | 90000 | 4.5635 | 0.8113 | 0.4817 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
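A minimal inference sketch for this fine-tune using the automatic-speech-recognition pipeline; `sample.wav` is a placeholder path, and the audio is assumed to be 16 kHz mono as is usual for wav2vec2-base fine-tunes.

```python
# Sketch: transcribe an audio clip with this wav2vec2 fine-tune.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Siyong/MC_RN")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder 16 kHz mono file
```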
srini98/a2c-AntBulletEnv-v0
srini98
2022-07-23T15:42:47Z
2
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-23T15:41:36Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1690.76 +/- 243.94 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
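A hedged sketch of what the usage section could look like, following the usual huggingface_sb3 pattern; the zip file name inside the repo is an assumption, and any VecNormalize statistics used during training are not restored here.

```python
# Sketch: download the trained A2C agent and evaluate it on AntBulletEnv-v0.
# The filename "a2c-AntBulletEnv-v0.zip" is an assumption; check the repo files.
import gym
import pybullet_envs  # noqa: F401 (registers AntBulletEnv-v0)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="srini98/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```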
steven123/Check_Gum_Teeth
steven123
2022-07-23T14:50:43Z
51
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-23T14:50:33Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: Check_Gum_Teeth results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 1.0 --- # Check_Gum_Teeth Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Bad_Gum ![Bad_Gum](images/Bad_Gum.jpg) #### Good_Gum ![Good_Gum](images/Good_Gum.jpg)
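A minimal inference sketch for this classifier via the image-classification pipeline; `teeth.jpg` is a placeholder path for a local photo.

```python
# Sketch: classify a gum/teeth photo with this HuggingPics model.
from transformers import pipeline

classifier = pipeline("image-classification", model="steven123/Check_Gum_Teeth")
print(classifier("teeth.jpg"))  # e.g. [{'label': 'Good_Gum', 'score': ...}, ...]
```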
th1s1s1t/ppo-LunarLander-v2
th1s1s1t
2022-07-23T14:41:24Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-23T14:41:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 290.28 +/- 26.36 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s3
jonatasgrosman
2022-07-23T14:28:41Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-23T14:22:07Z
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s3 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
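Because the card requires 16 kHz input, one way to be safe is to resample before transcribing; the sketch below uses librosa for resampling and the standard ASR pipeline, with `clip.wav` as a placeholder path.

```python
# Sketch: resample a clip to 16 kHz, then transcribe it with this fine-tune.
import librosa
from transformers import pipeline

speech, _ = librosa.load("clip.wav", sr=16_000)  # resamples to 16 kHz mono
asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-5_austria-5_s3",
)
print(asr(speech)["text"])  # raw 16 kHz waveform passed directly to the pipeline
```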
Siyong/M_RN
Siyong
2022-07-23T14:00:34Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-23T10:59:34Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: MilladRN results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MilladRN This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4355 - Wer: 0.4907 - Cer: 0.2802 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4000 - num_epochs: 750 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:-----:|:---------------:|:------:|:------:| | 3.3347 | 33.9 | 2000 | 2.2561 | 0.9888 | 0.6087 | | 1.3337 | 67.8 | 4000 | 1.8137 | 0.6877 | 0.3407 | | 0.6504 | 101.69 | 6000 | 2.0718 | 0.6245 | 0.3229 | | 0.404 | 135.59 | 8000 | 2.2246 | 0.6004 | 0.3221 | | 0.2877 | 169.49 | 10000 | 2.2624 | 0.5836 | 0.3107 | | 0.2149 | 203.39 | 12000 | 2.3788 | 0.5279 | 0.2802 | | 0.1693 | 237.29 | 14000 | 1.8928 | 0.5502 | 0.2937 | | 0.1383 | 271.19 | 16000 | 2.7520 | 0.5725 | 0.3103 | | 0.1169 | 305.08 | 18000 | 2.2552 | 0.5446 | 0.2968 | | 0.1011 | 338.98 | 20000 | 2.6794 | 0.5725 | 0.3119 | | 0.0996 | 372.88 | 22000 | 2.4704 | 0.5595 | 0.3142 | | 0.0665 | 406.78 | 24000 | 2.9073 | 0.5836 | 0.3194 | | 0.0538 | 440.68 | 26000 | 3.1357 | 0.5632 | 0.3213 | | 0.0538 | 474.58 | 28000 | 2.5639 | 0.5613 | 0.3091 | | 0.0493 | 508.47 | 30000 | 3.3801 | 0.5613 | 0.3119 | | 0.0451 | 542.37 | 32000 | 3.5469 | 0.5428 | 0.3158 | | 0.0307 | 576.27 | 34000 | 4.2243 | 0.5390 | 0.3126 | | 0.0301 | 610.17 | 36000 | 3.6666 | 0.5297 | 0.2929 | | 0.0269 | 644.07 | 38000 | 3.2164 | 0.5 | 0.2838 | | 0.0182 | 677.97 | 40000 | 3.0557 | 0.4963 | 0.2779 | | 0.0191 | 711.86 | 42000 | 3.5190 | 0.5130 | 0.2921 | | 0.0133 | 745.76 | 44000 | 3.4355 | 0.4907 | 0.2802 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
Someman/pegasus-samsum
Someman
2022-07-23T13:20:32Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-23T07:30:10Z
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4884 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6902 | 0.54 | 500 | 1.4884 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Siyong/M
Siyong
2022-07-23T10:51:07Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-23T07:38:42Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Millad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Millad This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2265 - Wer: 0.5465 - Cer: 0.3162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4000 - num_epochs: 750 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:-----:|:---------------:|:------:|:------:| | 3.2911 | 33.9 | 2000 | 2.2097 | 0.9963 | 0.6047 | | 1.3419 | 67.8 | 4000 | 1.9042 | 0.7007 | 0.3565 | | 0.6542 | 101.69 | 6000 | 1.7195 | 0.5985 | 0.3194 | | 0.373 | 135.59 | 8000 | 2.2219 | 0.6078 | 0.3241 | | 0.2805 | 169.49 | 10000 | 2.3114 | 0.6320 | 0.3304 | | 0.2014 | 203.39 | 12000 | 2.6898 | 0.6338 | 0.3597 | | 0.1611 | 237.29 | 14000 | 2.7808 | 0.6041 | 0.3379 | | 0.1265 | 271.19 | 16000 | 2.8304 | 0.5632 | 0.3289 | | 0.1082 | 305.08 | 18000 | 2.8373 | 0.5874 | 0.3344 | | 0.103 | 338.98 | 20000 | 2.8580 | 0.5743 | 0.3292 | | 0.0854 | 372.88 | 22000 | 2.5413 | 0.5539 | 0.3186 | | 0.0675 | 406.78 | 24000 | 2.5523 | 0.5502 | 0.3229 | | 0.0531 | 440.68 | 26000 | 2.9369 | 0.5483 | 0.3142 | | 0.0504 | 474.58 | 28000 | 3.1416 | 0.5595 | 0.3225 | | 0.0388 | 508.47 | 30000 | 2.5655 | 0.5390 | 0.3111 | | 0.0396 | 542.37 | 32000 | 3.1923 | 0.5558 | 0.3178 | | 0.0274 | 576.27 | 34000 | 2.9235 | 0.5520 | 0.3257 | | 0.0361 | 610.17 | 36000 | 3.3828 | 0.5762 | 0.3312 | | 0.02 | 644.07 | 38000 | 3.3822 | 0.5874 | 0.3466 | | 0.0176 | 677.97 | 40000 | 3.1191 | 0.5539 | 0.3209 | | 0.0181 | 711.86 | 42000 | 3.2022 | 0.5576 | 0.3237 | | 0.0124 | 745.76 | 44000 | 3.2265 | 0.5465 | 0.3162 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
sudo-s/robot22
sudo-s
2022-07-23T10:42:11Z
57
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-23T10:34:24Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: robot22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robot22 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem6 dataset. It achieves the following results on the evaluation set: - Loss: 2.5674 - Accuracy: 0.5077 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.9154 | 0.23 | 100 | 3.8417 | 0.2213 | | 3.1764 | 0.47 | 200 | 3.2243 | 0.3201 | | 2.8186 | 0.7 | 300 | 2.7973 | 0.4284 | | 2.632 | 0.93 | 400 | 2.5674 | 0.5077 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.3.2 - Tokenizers 0.12.1
valurank/headline_similarities
valurank
2022-07-23T10:21:47Z
4
2
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-07-23T10:21:35Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - MS Marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers --- # all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2') model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2) ------ ## Background The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a 1B sentence pairs dataset. 
We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project (7 TPU v3-8 devices), as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 384 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to its model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch, then apply the cross-entropy loss by comparing with the true pairs.

#### Hyperparameters

We trained our model on a TPU v3-8 for 100k steps with a batch size of 1024 (128 per TPU core), a learning-rate warm-up of 500 steps, and a sequence length limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.

#### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
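A short usage sketch for the headline-similarity use case this repository is named after, building on the embeddings described above; the headlines are made up, and the repository id itself is loaded (the card text refers to the upstream all-mpnet-base-v2 name).

```python
# Sketch: score pairwise headline similarity with the sentence embeddings above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("valurank/headline_similarities")
headlines = [
    "Central bank raises interest rates by half a point",
    "Interest rates go up again as inflation persists",
    "Local team wins championship after dramatic final",
]
embeddings = model.encode(headlines, normalize_embeddings=True)
print(util.cos_sim(embeddings, embeddings))  # pairwise cosine similarities
```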
kmkarakaya/turkishReviews-ds-mini
kmkarakaya
2022-07-23T09:06:24Z
6
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-07-07T13:29:04Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: turkishReviews-ds-mini results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # turkishReviews-ds-mini This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 9.1630 - Validation Loss: 9.2431 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 10.2672 | 9.9647 | 0 | | 9.6445 | 9.6190 | 1 | | 9.1630 | 9.2431 | 2 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
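The serialized optimizer above corresponds to the AdamWeightDecay plus linear warm-up/decay schedule used by the TF examples in transformers; a hedged sketch of rebuilding it with `create_optimizer` is below, where the total step count is an assumption because the card's serialized schedule does not state it cleanly (it records `decay_steps: -896`).

```python
# Sketch: rebuild an AdamWeightDecay optimizer with linear warm-up, as described above.
# num_train_steps is an assumed value, not taken from the card.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,            # initial_learning_rate in the card
    num_train_steps=1500,    # assumption
    num_warmup_steps=1000,   # warmup_steps in the card
    weight_decay_rate=0.01,  # weight_decay_rate in the card
)
```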
Chris1/q-FrozenLake-v1-8x8-no_slippery
Chris1
2022-07-23T08:48:05Z
0
0
null
[ "FrozenLake-v1-8x8-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-23T08:47:59Z
--- tags: - FrozenLake-v1-8x8-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-8x8-no_slippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8-no_slippery type: FrozenLake-v1-8x8-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Chris1/q-FrozenLake-v1-8x8-no_slippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
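The snippet above depends on helpers from the Deep RL course notebook (`load_from_hub`, `evaluate_agent`); a self-contained sketch of the same greedy evaluation is below, with a placeholder Q-table standing in for the one that would come out of the downloaded pickle, and the classic gym API (4-tuple `step`) assumed.

```python
# Sketch: greedy evaluation of a Q-table on FrozenLake-v1 8x8 (non-slippery).
# q_table below is a zero placeholder; in practice it comes from the pickle above.
import gym
import numpy as np

env = gym.make("FrozenLake-v1", map_name="8x8", is_slippery=False)
q_table = np.zeros((env.observation_space.n, env.action_space.n))

rewards = []
for episode in range(100):
    state = env.reset()
    done, total = False, 0.0
    while not done:
        action = int(np.argmax(q_table[state]))       # greedy action
        state, reward, done, info = env.step(action)  # classic gym 4-tuple API
        total += reward
    rewards.append(total)

print(f"mean_reward={np.mean(rewards):.2f} +/- {np.std(rewards):.2f}")
```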
SummerChiam/pond
SummerChiam
2022-07-23T07:47:49Z
51
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-22T18:26:03Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pond results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9909297227859497 --- # pond Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Algae0 ![Algae0](images/Algae0.png) #### Boiling0 ![Boiling0](images/Boiling0.png) #### BoilingNight0 ![BoilingNight0](images/BoilingNight0.png) #### Normal0 ![Normal0](images/Normal0.png) #### NormalCement0 ![NormalCement0](images/NormalCement0.png) #### NormalNight0 ![NormalNight0](images/NormalNight0.png) #### NormalRain0 ![NormalRain0](images/NormalRain0.png)
Yuchen/muril-large-cased-hita-qa
Yuchen
2022-07-23T07:01:06Z
13
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- # Question Answering model for Hindi and Tamil This model is part of the ensemble that ranked 4/943 in the [Hindi and Tamil Question Answering](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition held by Google Research India at Kaggle. ``` from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("Yuchen/muril-large-cased-hita-qa") model = AutoModelForQuestionAnswering.from_pretrained("Yuchen/muril-large-cased-hita-qa") ```
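A short inference sketch using the question-answering pipeline on top of the snippet above; the Hindi question/context pair is made up for illustration (English glosses in the comments).

```python
# Sketch: extractive QA with the fine-tuned MuRIL checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Yuchen/muril-large-cased-hita-qa",
    tokenizer="Yuchen/muril-large-cased-hita-qa",
)
result = qa(
    question="ताजमहल कहाँ स्थित है?",                # "Where is the Taj Mahal located?"
    context="ताजमहल भारत के आगरा शहर में स्थित है।",  # "The Taj Mahal is in Agra, India."
)
print(result["answer"], result["score"])
```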
SushantGautam/SoccerSum-NarSum
SushantGautam
2022-07-23T06:45:09Z
3
0
transformers
[ "transformers", "pytorch", "led", "text2text-generation", "generated_from_trainer", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-13T18:51:19Z
--- language: - en tags: - generated_from_trainer metrics: - rouge model-index: - name: SportsSum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SportsSum This model is a fine-tuned version of [allenai/led-base-16384-ms2](https://huggingface.co/allenai/led-base-16384-ms2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2759 - Rouge1: 52.3608 - Rouge2: 27.6526 - Rougel: 31.8509 - Rougelsum: 49.9086 - Gen Len: 248.1199 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 36 - eval_batch_size: 36 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 2.3.2 - Tokenizers 0.12.1
marice/ppo-LunarLander-v2
marice
2022-07-23T06:29:26Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-23T06:28:56Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 194.16 +/- 29.74 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
jags/floraldiffusion
jags
2022-07-23T05:47:56Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-07-19T19:43:01Z
--- license: mit ---

# Floral Diffusion V1

Floral Diffusion is a custom diffusion model trained by @jags111 on a set of 10K floral images of 512 kB size, trained on the 256 x 256 diffusion model. It can be used to create wonderful floral-styled images.

Custom model settings:

    model_config.update({
        'attention_resolutions': '16',
        'class_cond': False,
        'diffusion_steps': 1000,
        'rescale_timesteps': True,
        'timestep_respacing': 'ddim100',
        'image_size': 256,
        'learn_sigma': True,
        'noise_schedule': 'linear',
        'num_channels': 128,
        'num_head_channels': 64,
        'num_res_blocks': 2,
        'resblock_updown': True,
        'use_checkpoint': use_checkpoint,
        'use_fp16': True,
        'use_scale_shift_norm': False,
    })

To use it, select FloralDiffusion as the model in the DD version. If you create a fun image with this model, please share your result with <a href="https://twitter.com/jags111">@jags111</a> #floraldiffusion. Join us on Patreon to extend support: <a href="https://www.patreon.com/jags111">patreon</a>
bigmorning/distilbert_final_0005
bigmorning
2022-07-23T05:09:49Z
4
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-23T05:03:57Z
--- tags: - generated_from_keras_callback model-index: - name: distilbert_final_0005 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_final_0005 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9295 - Validation Loss: 0.9157 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.9319 | 0.9178 | 0 | | 0.9310 | 0.9167 | 1 | | 0.9301 | 0.9170 | 2 | | 0.9300 | 0.9161 | 3 | | 0.9295 | 0.9157 | 4 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
huggingtweets/fifteenai
huggingtweets
2022-07-23T04:16:18Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/fifteenai/1658549683215/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1513191641921765388/rToX3RpX_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">15</div> <div style="text-align: center; font-size: 14px;">@fifteenai</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 15. | Data | 15 | | --- | --- | | Tweets downloaded | 111 | | Retweets | 9 | | Short tweets | 10 | | Tweets kept | 92 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/169wgrhk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fifteenai's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/390dyi5s) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/390dyi5s/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/fifteenai') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
shivi/shiftViT-Model
shivi
2022-07-23T02:00:17Z
0
0
keras
[ "keras", "tensorboard", "tf-keras", "ShiftVit", "Image Classification", "region:us" ]
null
2022-07-23T01:59:31Z
--- library_name: keras tags: - ShiftVit - Image Classification --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | AdamW | | learning_rate.class_name | WarmUpCosine | | learning_rate.config.lr_start | 1e-05 | | learning_rate.config.lr_max | 0.001 | | learning_rate.config.total_steps | 15625 | | learning_rate.config.warmup_steps | 2343 | | decay | 0.0 | | beta_1 | 0.8999999761581421 | | beta_2 | 0.9990000128746033 | | epsilon | 1e-07 | | amsgrad | False | | weight_decay | 9.999999747378752e-05 | | exclude_from_weight_decay | None | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
Chris1/q-FrozenLake-v1-8x8-Slippery
Chris1
2022-07-23T00:32:56Z
0
0
null
[ "FrozenLake-v1-8x8", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-23T00:32:49Z
--- tags: - FrozenLake-v1-8x8 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-8x8-Slippery results: - metrics: - type: mean_reward value: 0.50 +/- 0.50 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8 type: FrozenLake-v1-8x8 --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Chris1/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
thegenerativegeneration/ukiyoe-diffusion-256
thegenerativegeneration
2022-07-23T00:28:47Z
0
1
null
[ "discodiffusion", "guideddiffusion", "dataset:wikiart", "region:us" ]
null
2022-07-04T11:56:56Z
--- tags: - discodiffusion - guideddiffusion thumbnail: https://de.gravatar.com/userimage/52045156/8ab369c1d246e65bda88813ce7c4cb81.jpeg datasets: - wikiart --- # Ukiyo-e Diffusion If you make something using these models, you're welcome to mention me [@thegenerativegeneration](https://www.instagram.com/thegenerativegeneration/) Named by dataset used. Current and best version is [models/ukiyoe-all/v1/ema_0.9999_056000.pt](models/ukiyoe-all/v1/ema_0.9999_056000.pt) # Current Plans * clean dataset * remove borders * remove some of the samples with text in them # Models ## Ukiyo-e-all ### v1 [models/ukiyoe-all/v1/ema_0.9999_056000.pt](models/ukiyoe-all/v1/ema_0.9999_056000.pt) Model configuration is: ```python model_config = { 'attention_resolutions': '32, 16, 8', 'class_cond': False, 'image_size': 256, 'learn_sigma': True, 'rescale_timesteps': True, 'noise_schedule': 'linear', 'num_channels': 128, 'num_heads': 4, 'num_res_blocks': 2, 'resblock_updown': True, 'use_checkpoint': True, 'use_fp16': True, 'use_scale_shift_norm': True, } ``` #### Tips - Results closest to original training data are achieved by turning off the secondary model in Disco Diffusion. - Turning secondary model on can lead to very creative results - It is not necessary to specify Ukiyo-e as artstyle to get ukiyo-e-like images. #### Examples If you make something nice using these models, I would like to link your image. ##### Secondary Off ![](models/ukiyoe-all/v1/images/secondary_off_3.png) ![](models/ukiyoe-all/v1/images/secondary_off_0.png) ![](models/ukiyoe-all/v1/images/secondary_off_1.png) ![](models/ukiyoe-all/v1/images/secondary_off_2.png) ##### Secondary On ![](models/ukiyoe-all/v1/images/secondary_on_0.png) ![](models/ukiyoe-all/v1/images/secondary_on_1.png) ![](models/ukiyoe-all/v1/images/secondary_on_2.png) #### About Trained from scratch on a ~170000 images corpus of [ukiyo-e.org](https://ukiyo-e.org) filtered by [colorfulness](https://pyimagesearch.com/2017/06/05/computing-image-colorfulness-with-opencv-and-python/ ) >= 5. ## (Deprecated) Ukiyo-e-few [models/ukiyoe-few/v1/ukiyoe_diffusion_256_022000.pt](models/ukiyoe-few/v1/ukiyoe_diffusion_256_022000.pt) Finetuned on 5224 images from Wikiart (1168) and ? (). Model configuration is ```python model_config = { 'attention_resolutions': '16', 'class_cond': False, 'diffusion_steps': 1000, 'rescale_timesteps': True, 'timestep_respacing': 'ddim100', 'image_size': 256, 'learn_sigma': True, 'noise_schedule': 'linear', 'num_channels': 128, 'num_heads': 1, 'num_res_blocks': 2, 'use_checkpoint': True, 'use_scale_shift_norm': False } ``` Trained using a fork of [guided-diffusion-sxela](https://github.com/thegenerativegeneration/guided-diffusion-sxela). Added random crop which did not lead to good results.
Chris1/q-FrozenLake-v1-8x8-noSlippery
Chris1
2022-07-23T00:20:43Z
0
0
null
[ "FrozenLake-v1-8x8-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-13T13:35:39Z
--- tags: - FrozenLake-v1-8x8-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-8x8-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8-no_slippery type: FrozenLake-v1-8x8-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Chris1/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
huggingtweets/luciengreaves-pontifex
huggingtweets
2022-07-23T00:00:09Z
4
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-22T23:57:54Z
--- language: en thumbnail: http://www.huggingtweets.com/luciengreaves-pontifex/1658534403996/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/666311094256971779/rhb7qkCD_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/507818066814590976/KNG-IkT9_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Lucien Greaves & Pope Francis</div> <div style="text-align: center; font-size: 14px;">@luciengreaves-pontifex</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Lucien Greaves & Pope Francis. | Data | Lucien Greaves | Pope Francis | | --- | --- | --- | | Tweets downloaded | 3197 | 3250 | | Retweets | 536 | 0 | | Short tweets | 379 | 103 | | Tweets kept | 2282 | 3147 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/q0nkdf60/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @luciengreaves-pontifex's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2y98dgmx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2y98dgmx/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/luciengreaves-pontifex') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Evelyn18/roberta-base-spanish-squades-modelo1
Evelyn18
2022-07-22T23:02:37Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:becasv2", "endpoints_compatible", "region:us" ]
question-answering
2022-07-22T22:55:11Z
--- tags: - generated_from_trainer datasets: - becasv2 model-index: - name: roberta-base-spanish-squades-modelo1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-spanish-squades-modelo1 This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 5.7001 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 11 - eval_batch_size: 11 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 6 | 2.7892 | | No log | 2.0 | 12 | 3.7037 | | No log | 3.0 | 18 | 5.1221 | | No log | 4.0 | 24 | 4.5988 | | No log | 5.0 | 30 | 5.9202 | | No log | 6.0 | 36 | 5.0345 | | No log | 7.0 | 42 | 4.4421 | | No log | 8.0 | 48 | 4.6969 | | No log | 9.0 | 54 | 5.2084 | | No log | 10.0 | 60 | 5.7001 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
tmgondal/bert-finetuned-squad
tmgondal
2022-07-22T21:13:25Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-07-22T18:44:10Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
vish88/roberta-large-mnli-fer-finetuned
vish88
2022-07-22T20:30:58Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-15T17:41:22Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-large-mnli-fer-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-mnli-fer-finetuned This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6940 - Accuracy: 0.5005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7049 | 1.0 | 554 | 0.6895 | 0.5750 | | 0.6981 | 2.0 | 1108 | 0.7054 | 0.5005 | | 0.7039 | 3.0 | 1662 | 0.6936 | 0.5005 | | 0.6976 | 4.0 | 2216 | 0.6935 | 0.4995 | | 0.6991 | 5.0 | 2770 | 0.6940 | 0.5005 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
scottstots/roberta-base-prop-16-train-set
scottstots
2022-07-22T20:18:31Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-01T18:28:56Z
--- license: mit tags: - generated_from_trainer model-index: - name: roberta-base-prop-16-train-set results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-prop-16-train-set This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
jeanconstantin/causal_bert_fr
jeanconstantin
2022-07-22T19:53:23Z
5
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-06T20:27:18Z
A French model trained to recognize causal discourse relations. The model receives two pieces of text and estimates the probability that their relation is a reason relation, a result relation, or non-causal. The model was trained with the Penn Discourse Tree Bank 2 (PDTB2), the English reference corpus for discourse relations. PDTB2 was automatically translated into French in order to fine-tune the pre-trained CamemBERT-large model. The model can be loaded via the CamemBERT library: CamembertForSequenceClassification. Before being processed, the text must be tokenized with the CamemBERT tokenizer: CamembertTokenizer.
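A minimal inference sketch, assuming the checkpoint follows the standard Hugging Face sequence-classification format; the label mapping and the two example spans are illustrative only.
```python
import torch
from transformers import CamembertTokenizer, CamembertForSequenceClassification

tokenizer = CamembertTokenizer.from_pretrained("jeanconstantin/causal_bert_fr")
model = CamembertForSequenceClassification.from_pretrained("jeanconstantin/causal_bert_fr")

# Two text spans whose discourse relation (reason / result / non-causal) we want to score.
arg1 = "Il a raté son train."
arg2 = "Il est arrivé en retard à la réunion."

inputs = tokenizer(arg1, arg2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs)  # probabilities over the relation classes; see model.config.id2label for names
```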
masterdezign/ppo-CarRacing-v0-10M
masterdezign
2022-07-22T19:28:05Z
2
0
stable-baselines3
[ "stable-baselines3", "CarRacing-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-22T19:27:11Z
--- library_name: stable-baselines3 tags: - CarRacing-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 65.27 +/- 147.53 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: CarRacing-v0 type: CarRacing-v0 --- # **PPO** Agent playing **CarRacing-v0** This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
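A minimal sketch for the TODO above, assuming the usual huggingface_sb3 workflow; the checkpoint filename is a guess, so check it against the files in the repository.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed, not taken from the card.
checkpoint = load_from_hub(
    repo_id="masterdezign/ppo-CarRacing-v0-10M",
    filename="ppo-CarRacing-v0.zip",
)
model = PPO.load(checkpoint)

# Roll out a single episode with the trained policy.
env = gym.make("CarRacing-v0")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode reward: {total_reward:.2f}")
```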
huggingtweets/deepleffen-falco-tsm_leffen
huggingtweets
2022-07-22T19:10:49Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-22T19:09:51Z
--- language: en thumbnail: http://www.huggingtweets.com/deepleffen-falco-tsm_leffen/1658517045179/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1241879678455078914/e2EdZIrr_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1527824997388935168/-Ohf5n-I_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1547974425718300675/wvQuPBGR_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot & nick & TSM FTX Leffen</div> <div style="text-align: center; font-size: 14px;">@deepleffen-falco-tsm_leffen</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Deep Leffen Bot & nick & TSM FTX Leffen. | Data | Deep Leffen Bot | nick | TSM FTX Leffen | | --- | --- | --- | --- | | Tweets downloaded | 591 | 3249 | 3221 | | Retweets | 14 | 180 | 285 | | Short tweets | 27 | 582 | 282 | | Tweets kept | 550 | 2487 | 2654 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13ch35ln/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-falco-tsm_leffen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1pw6etfi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1pw6etfi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/deepleffen-falco-tsm_leffen') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
osanseviero/platzi-test
osanseviero
2022-07-22T18:15:02Z
5
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-22T18:11:18Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: platzi-test results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-test This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9375, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
huggingtweets/deepleffen-tsm_leffen
huggingtweets
2022-07-22T17:50:36Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-22T17:49:13Z
--- language: en thumbnail: http://www.huggingtweets.com/deepleffen-tsm_leffen/1658512231427/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1241879678455078914/e2EdZIrr_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1547974425718300675/wvQuPBGR_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Deep Leffen Bot & TSM FTX Leffen</div> <div style="text-align: center; font-size: 14px;">@deepleffen-tsm_leffen</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Deep Leffen Bot & TSM FTX Leffen. | Data | Deep Leffen Bot | TSM FTX Leffen | | --- | --- | --- | | Tweets downloaded | 591 | 3249 | | Retweets | 14 | 291 | | Short tweets | 27 | 283 | | Tweets kept | 550 | 2675 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lq4lpvp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-tsm_leffen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1v9tktg9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1v9tktg9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/deepleffen-tsm_leffen') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
heriosousa/testpyramidsrnd
heriosousa
2022-07-22T17:50:31Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-07-22T17:50:26Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: heriosousa/testpyramidsrnd 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
DL4NLP-Group105/xtremedistil-l12-h384-uncased-hotpot_qa
DL4NLP-Group105
2022-07-22T16:57:12Z
0
0
null
[ "region:us" ]
null
2022-07-22T16:55:10Z
Language model: xtremedistil-l12-h384-uncased Language: English Downstream task: extractive question answering Training data: hotpot_qa Eval data: hotpot_qa EM: F1: GroupID: 105
llei/xtremedistil-l12-h384-uncased-HotpotQA
llei
2022-07-22T16:52:10Z
0
0
null
[ "region:us" ]
null
2022-07-22T16:44:04Z
Language model: xtremedistil-l12-h384-uncased Language: English Training data: hotpot_qa Eval data: hotpot_qa Code: See an example QA pipeline on Haystack EM: 46.4 F1: 64.6 GroupId: 105
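The card only points to a Haystack pipeline, so the following is a hedged sketch using the plain transformers question-answering pipeline instead; it assumes the repository contains a standard extractive-QA checkpoint, and the question/context pair is made up.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="llei/xtremedistil-l12-h384-uncased-HotpotQA")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The xtremedistil-l12-h384-uncased checkpoint was fine-tuned and evaluated "
            "on the HotpotQA dataset by group 105.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```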
FabioDataGeek/distilbert-base-uncased-finetuned-emotion
FabioDataGeek
2022-07-22T16:02:35Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9258450981645597 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2196 - Accuracy: 0.926 - F1: 0.9258 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8279 | 1.0 | 250 | 0.3208 | 0.9025 | 0.8979 | | 0.2538 | 2.0 | 500 | 0.2196 | 0.926 | 0.9258 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
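Not part of the generated card: a minimal inference sketch, assuming the checkpoint loads through the standard text-classification pipeline; the example sentence and the printed label are illustrative.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="FabioDataGeek/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this worked, I'm thrilled!"))
# e.g. [{'label': 'joy', 'score': 0.99}] -- label names come from the emotion dataset
```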
Eleven/distilbert-base-uncased-finetuned-emotion
Eleven
2022-07-22T15:05:00Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-27T17:59:32Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2263 - Accuracy: 0.9225 - F1: 0.9221 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8571 | 1.0 | 250 | 0.3333 | 0.902 | 0.8982 | | 0.2507 | 2.0 | 500 | 0.2263 | 0.9225 | 0.9221 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Tokenizers 0.12.1
sudo-s/exper7_mesum5
sudo-s
2022-07-22T14:31:45Z
58
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-22T13:42:11Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: exper7_mesum5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # exper7_mesum5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset. It achieves the following results on the evaluation set: - Loss: 0.5889 - Accuracy: 0.8538 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2072 | 0.23 | 100 | 4.1532 | 0.1923 | | 3.5433 | 0.47 | 200 | 3.5680 | 0.2888 | | 3.1388 | 0.7 | 300 | 3.1202 | 0.3911 | | 2.7924 | 0.93 | 400 | 2.7434 | 0.4787 | | 2.1269 | 1.16 | 500 | 2.3262 | 0.5781 | | 1.8589 | 1.4 | 600 | 1.9754 | 0.6272 | | 1.7155 | 1.63 | 700 | 1.7627 | 0.6840 | | 1.4689 | 1.86 | 800 | 1.5937 | 0.6994 | | 1.0149 | 2.09 | 900 | 1.3168 | 0.7497 | | 0.8148 | 2.33 | 1000 | 1.1630 | 0.7615 | | 0.7159 | 2.56 | 1100 | 1.0869 | 0.7675 | | 0.7257 | 2.79 | 1200 | 0.9607 | 0.7893 | | 0.4171 | 3.02 | 1300 | 0.8835 | 0.7935 | | 0.2969 | 3.26 | 1400 | 0.8259 | 0.8130 | | 0.2405 | 3.49 | 1500 | 0.7711 | 0.8142 | | 0.2948 | 3.72 | 1600 | 0.7629 | 0.8112 | | 0.1765 | 3.95 | 1700 | 0.7117 | 0.8124 | | 0.1603 | 4.19 | 1800 | 0.6946 | 0.8237 | | 0.0955 | 4.42 | 1900 | 0.6597 | 0.8349 | | 0.0769 | 4.65 | 2000 | 0.6531 | 0.8266 | | 0.0816 | 4.88 | 2100 | 0.6335 | 0.8337 | | 0.0315 | 5.12 | 2200 | 0.6087 | 0.8402 | | 0.0368 | 5.35 | 2300 | 0.6026 | 0.8444 | | 0.0377 | 5.58 | 2400 | 0.6450 | 0.8278 | | 0.0603 | 5.81 | 2500 | 0.6564 | 0.8343 | | 0.0205 | 6.05 | 2600 | 0.6119 | 0.8467 | | 0.019 | 6.28 | 2700 | 0.6070 | 0.8479 | | 0.0249 | 6.51 | 2800 | 0.6002 | 0.8538 | | 0.0145 | 6.74 | 2900 | 0.6012 | 0.8497 | | 0.0134 | 6.98 | 3000 | 0.5991 | 0.8521 | | 0.0271 | 7.21 | 3100 | 0.5972 | 0.8503 | | 0.0128 | 7.44 | 3200 | 0.5911 | 0.8521 | | 0.0123 | 7.67 | 3300 | 0.5889 | 0.8538 | | 0.0278 | 7.91 | 3400 | 0.6135 | 0.8491 | | 0.0106 | 8.14 | 3500 | 0.5934 | 0.8533 | | 0.0109 | 8.37 | 3600 | 0.5929 | 0.8533 | | 0.0095 | 8.6 | 3700 | 0.5953 | 0.8550 | | 0.009 | 8.84 | 3800 | 0.5933 | 0.8574 | | 0.009 | 9.07 | 3900 | 0.5948 | 0.8550 | | 0.0089 | 9.3 | 4000 | 0.5953 | 0.8556 | | 0.0086 | 9.53 | 4100 | 0.5956 | 0.8544 | | 0.0085 | 9.77 | 4200 | 0.5955 | 0.8556 | | 0.0087 | 10.0 | 4300 | 0.5954 | 0.8538 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
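A minimal inference sketch (not from the card), assuming the repository ships the fine-tuned ViT weights together with its feature extractor; the image path is a placeholder.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sudo-s/exper7_mesum5")

# "specimen.jpg" is a placeholder; a local path, URL, or PIL.Image all work.
for prediction in classifier("specimen.jpg", top_k=5):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```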
tsrivatsav/wav2vec2-large-xls-r-300m-en-colab
tsrivatsav
2022-07-22T14:20:41Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-20T02:32:56Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: wav2vec2-large-xls-r-300m-en-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-en-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 2.7541 - Wer: 1.0 - Cer: 0.9877 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:---:|:------:| | No log | 1.94 | 33 | 2.9905 | 1.0 | 1.0 | | No log | 3.88 | 66 | 2.9023 | 1.0 | 1.0 | | No log | 5.82 | 99 | 2.8788 | 1.0 | 1.0 | | 3.7488 | 7.76 | 132 | 2.8624 | 1.0 | 1.0 | | 3.7488 | 9.71 | 165 | 2.7541 | 1.0 | 0.9877 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cpu - Datasets 1.18.3 - Tokenizers 0.12.1
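A minimal inference sketch, not part of the card, assuming a standard CTC checkpoint; given the reported WER of 1.0, the transcriptions are unlikely to be usable, and the audio path is a placeholder.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="tsrivatsav/wav2vec2-large-xls-r-300m-en-colab",
)

# wav2vec2 models expect 16 kHz mono audio; "sample.wav" is a placeholder path.
print(asr("sample.wav"))
```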
Desh/SOTA
Desh
2022-07-22T14:00:38Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-07-22T13:55:47Z
--- title: 🙋NLP QA Text Context Gradio👩‍⚕️ emoji: 👩‍⚕️🙋📑 colorFrom: purple colorTo: green sdk: gradio sdk_version: 3.0.5 app_file: app.py pinned: false license: mit --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
bigmorning/distilbert_new2_0060
bigmorning
2022-07-22T13:36:26Z
4
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-22T13:17:27Z
--- tags: - generated_from_keras_callback model-index: - name: distilbert_new2_0060 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_new2_0060 This model is a fine-tuned version of [/content/drive/MyDrive/Colab Notebooks/oscar/trybackup_distilbert/new_backup_0105105](https://huggingface.co//content/drive/MyDrive/Colab Notebooks/oscar/trybackup_distilbert/new_backup_0105105) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9522 - Validation Loss: 0.9345 - Epoch: 59 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0180 | 0.9873 | 0 | | 1.0163 | 0.9878 | 1 | | 1.0145 | 0.9856 | 2 | | 1.0139 | 0.9830 | 3 | | 1.0122 | 0.9831 | 4 | | 1.0118 | 0.9830 | 5 | | 1.0094 | 0.9800 | 6 | | 1.0075 | 0.9809 | 7 | | 1.0066 | 0.9784 | 8 | | 1.0062 | 0.9768 | 9 | | 1.0032 | 0.9751 | 10 | | 1.0023 | 0.9764 | 11 | | 1.0008 | 0.9735 | 12 | | 0.9994 | 0.9730 | 13 | | 0.9986 | 0.9761 | 14 | | 0.9975 | 0.9714 | 15 | | 0.9953 | 0.9708 | 16 | | 0.9941 | 0.9683 | 17 | | 0.9933 | 0.9681 | 18 | | 0.9920 | 0.9688 | 19 | | 0.9907 | 0.9648 | 20 | | 0.9897 | 0.9625 | 21 | | 0.9890 | 0.9642 | 22 | | 0.9873 | 0.9633 | 23 | | 0.9867 | 0.9618 | 24 | | 0.9857 | 0.9600 | 25 | | 0.9839 | 0.9598 | 26 | | 0.9827 | 0.9585 | 27 | | 0.9821 | 0.9607 | 28 | | 0.9809 | 0.9579 | 29 | | 0.9803 | 0.9561 | 30 | | 0.9786 | 0.9563 | 31 | | 0.9774 | 0.9536 | 32 | | 0.9766 | 0.9542 | 33 | | 0.9756 | 0.9523 | 34 | | 0.9743 | 0.9525 | 35 | | 0.9730 | 0.9513 | 36 | | 0.9721 | 0.9507 | 37 | | 0.9715 | 0.9506 | 38 | | 0.9702 | 0.9482 | 39 | | 0.9694 | 0.9493 | 40 | | 0.9689 | 0.9462 | 41 | | 0.9673 | 0.9463 | 42 | | 0.9669 | 0.9444 | 43 | | 0.9659 | 0.9450 | 44 | | 0.9643 | 0.9429 | 45 | | 0.9625 | 0.9432 | 46 | | 0.9625 | 0.9428 | 47 | | 0.9609 | 0.9408 | 48 | | 0.9598 | 0.9399 | 49 | | 0.9596 | 0.9407 | 50 | | 0.9590 | 0.9393 | 51 | | 0.9580 | 0.9380 | 52 | | 0.9562 | 0.9383 | 53 | | 0.9558 | 0.9369 | 54 | | 0.9543 | 0.9379 | 55 | | 0.9545 | 0.9362 | 56 | | 0.9534 | 0.9349 | 57 | | 0.9523 | 0.9338 | 58 | | 0.9522 | 0.9345 | 59 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
danhsf/m2m100_418M-finetuned-kde4-en-to-pt_BR
danhsf
2022-07-22T12:47:59Z
71
1
transformers
[ "transformers", "pytorch", "tensorboard", "m2m_100", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-07-22T01:46:42Z
--- license: mit tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: m2m100_418M-finetuned-kde4-en-to-pt_BR results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 args: en-pt_BR metrics: - name: Bleu type: bleu value: 58.31959113813223 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # m2m100_418M-finetuned-kde4-en-to-pt_BR This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.5150 - Bleu: 58.3196 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
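A usage sketch not included in the generated card, following the documented M2M100 translation API; the example sentence is arbitrary.
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "danhsf/m2m100_418M-finetuned-kde4-en-to-pt_BR"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"
encoded = tokenizer("Open the file manager and select a folder.", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("pt"),  # force Portuguese as the target language
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```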
sudo-s/exper3_mesum5
sudo-s
2022-07-22T12:10:49Z
58
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-22T11:30:55Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: exper3_mesum5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # exper3_mesum5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset. It achieves the following results on the evaluation set: - Loss: 0.6366 - Accuracy: 0.8367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.895 | 0.23 | 100 | 3.8276 | 0.1935 | | 3.1174 | 0.47 | 200 | 3.1217 | 0.3107 | | 2.6 | 0.7 | 300 | 2.5399 | 0.4207 | | 2.256 | 0.93 | 400 | 2.1767 | 0.5160 | | 1.5441 | 1.16 | 500 | 1.8086 | 0.5852 | | 1.3834 | 1.4 | 600 | 1.5565 | 0.6325 | | 1.1995 | 1.63 | 700 | 1.3339 | 0.6763 | | 1.0845 | 1.86 | 800 | 1.3299 | 0.6533 | | 0.6472 | 2.09 | 900 | 1.0679 | 0.7219 | | 0.5948 | 2.33 | 1000 | 1.0286 | 0.7124 | | 0.5565 | 2.56 | 1100 | 0.9595 | 0.7284 | | 0.4879 | 2.79 | 1200 | 0.8915 | 0.7420 | | 0.2816 | 3.02 | 1300 | 0.8159 | 0.7763 | | 0.2412 | 3.26 | 1400 | 0.7766 | 0.7911 | | 0.2015 | 3.49 | 1500 | 0.7850 | 0.7828 | | 0.274 | 3.72 | 1600 | 0.7361 | 0.7935 | | 0.1244 | 3.95 | 1700 | 0.7299 | 0.7911 | | 0.0794 | 4.19 | 1800 | 0.7441 | 0.7846 | | 0.0915 | 4.42 | 1900 | 0.7614 | 0.7941 | | 0.0817 | 4.65 | 2000 | 0.7310 | 0.8012 | | 0.0561 | 4.88 | 2100 | 0.7222 | 0.8065 | | 0.0165 | 5.12 | 2200 | 0.7515 | 0.8059 | | 0.0168 | 5.35 | 2300 | 0.6687 | 0.8213 | | 0.0212 | 5.58 | 2400 | 0.6671 | 0.8249 | | 0.0389 | 5.81 | 2500 | 0.6893 | 0.8278 | | 0.0087 | 6.05 | 2600 | 0.6839 | 0.8260 | | 0.0087 | 6.28 | 2700 | 0.6412 | 0.8320 | | 0.0077 | 6.51 | 2800 | 0.6366 | 0.8367 | | 0.0065 | 6.74 | 2900 | 0.6697 | 0.8272 | | 0.0061 | 6.98 | 3000 | 0.6510 | 0.8349 | | 0.0185 | 7.21 | 3100 | 0.6452 | 0.8367 | | 0.0059 | 7.44 | 3200 | 0.6426 | 0.8379 | | 0.0062 | 7.67 | 3300 | 0.6398 | 0.8379 | | 0.0315 | 7.91 | 3400 | 0.6397 | 0.8385 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
ameerazam08/autotrain-imdb-1166543171
ameerazam08
2022-07-22T11:56:54Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain", "en", "dataset:ameerazam08/autotrain-data-imdb", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-22T11:46:52Z
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - ameerazam08/autotrain-data-imdb co2_eq_emissions: 0.07308302140406821 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1166543171 - CO2 Emissions (in grams): 0.07308302140406821 ## Validation Metrics - Loss: 0.2211569994688034 - Accuracy: 0.9138 - Precision: 0.9020598523124758 - Recall: 0.9284 - AUC: 0.9711116000000001 - F1: 0.9150404100137985 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ameerazam08/autotrain-imdb-1166543171 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ameerazam08/autotrain-imdb-1166543171", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("ameerazam08/autotrain-imdb-1166543171", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
sudo-s/exper2_mesum5
sudo-s
2022-07-22T11:39:11Z
55
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-22T11:15:01Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: exper2_mesum5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # exper2_mesum5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset. It achieves the following results on the evaluation set: - Loss: 3.4589 - Accuracy: 0.1308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.4265 | 0.23 | 100 | 4.3676 | 0.0296 | | 4.1144 | 0.47 | 200 | 4.1606 | 0.0544 | | 4.0912 | 0.7 | 300 | 4.1071 | 0.0509 | | 4.0361 | 0.93 | 400 | 4.0625 | 0.0669 | | 4.0257 | 1.16 | 500 | 3.9682 | 0.0822 | | 3.8846 | 1.4 | 600 | 3.9311 | 0.0834 | | 3.9504 | 1.63 | 700 | 3.9255 | 0.0698 | | 3.9884 | 1.86 | 800 | 3.9404 | 0.0722 | | 3.7191 | 2.09 | 900 | 3.8262 | 0.0935 | | 3.7952 | 2.33 | 1000 | 3.8236 | 0.0734 | | 3.8085 | 2.56 | 1100 | 3.7694 | 0.0964 | | 3.7535 | 2.79 | 1200 | 3.6757 | 0.1059 | | 3.4218 | 3.02 | 1300 | 3.6474 | 0.1095 | | 3.5172 | 3.26 | 1400 | 3.5621 | 0.1166 | | 3.5173 | 3.49 | 1500 | 3.5579 | 0.1207 | | 3.4346 | 3.72 | 1600 | 3.4817 | 0.1249 | | 3.3995 | 3.95 | 1700 | 3.4589 | 0.1308 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
sudo-s/exper1_mesum5
sudo-s
2022-07-22T11:23:22Z
59
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-22T11:00:05Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: exper1_mesum5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # exper1_mesum5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset. It achieves the following results on the evaluation set: - Loss: 0.6401 - Accuracy: 0.8278 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.9352 | 0.23 | 100 | 3.8550 | 0.1959 | | 3.1536 | 0.47 | 200 | 3.1755 | 0.2888 | | 2.6937 | 0.7 | 300 | 2.6332 | 0.4272 | | 2.3748 | 0.93 | 400 | 2.2833 | 0.4970 | | 1.5575 | 1.16 | 500 | 1.8712 | 0.5888 | | 1.4063 | 1.4 | 600 | 1.6048 | 0.6314 | | 1.1841 | 1.63 | 700 | 1.4109 | 0.6621 | | 1.0857 | 1.86 | 800 | 1.1832 | 0.7112 | | 0.582 | 2.09 | 900 | 1.0371 | 0.7479 | | 0.5971 | 2.33 | 1000 | 0.9839 | 0.7462 | | 0.4617 | 2.56 | 1100 | 0.9233 | 0.7657 | | 0.4621 | 2.79 | 1200 | 0.8417 | 0.7828 | | 0.2128 | 3.02 | 1300 | 0.7644 | 0.7970 | | 0.1883 | 3.26 | 1400 | 0.7001 | 0.8183 | | 0.1501 | 3.49 | 1500 | 0.6826 | 0.8201 | | 0.1626 | 3.72 | 1600 | 0.6568 | 0.8254 | | 0.1053 | 3.95 | 1700 | 0.6401 | 0.8278 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
FAICAM/distilled-finetuned-imdb
FAICAM
2022-07-22T10:59:52Z
5
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-22T10:53:28Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: FAICAM/distilled-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # FAICAM/distilled-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8612 - Validation Loss: 2.5836 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -687, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.8612 | 2.5836 | 0 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Tokenizers 0.12.1
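The card does not show how to query the model. A minimal sketch, assuming the repository holds only the TensorFlow weights produced during Keras training (hence `framework="tf"`); the example sentence is illustrative only:

```python
from transformers import pipeline

# DistilBERT-uncased checkpoints use the [MASK] token.
unmasker = pipeline("fill-mask", model="FAICAM/distilled-finetuned-imdb", framework="tf")
print(unmasker("This movie was absolutely [MASK]."))
```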
huggingtweets/thenextweb
huggingtweets
2022-07-22T10:35:30Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-22T10:35:23Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1306571874000830464/AZtkNMd-_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">TNW</div> <div style="text-align: center; font-size: 14px;">@thenextweb</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from TNW. | Data | TNW | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 39 | | Short tweets | 44 | | Tweets kept | 3167 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3egcwo6t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thenextweb's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1s2bu9ha) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1s2bu9ha/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/thenextweb') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ghpkishore/distilbert-base-uncased-finetuned-emotion
ghpkishore
2022-07-22T10:09:57Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-10T11:51:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9285 - name: F1 type: f1 value: 0.9285439912301902 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2183 - Accuracy: 0.9285 - F1: 0.9285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8381 | 1.0 | 250 | 0.3165 | 0.9075 | 0.9040 | | 0.2524 | 2.0 | 500 | 0.2183 | 0.9285 | 0.9285 | ### Framework versions - Transformers 4.19.3 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
igpaub/a2c-AntBulletEnv-v0
igpaub
2022-07-22T09:36:59Z
2
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-22T07:19:05Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1596.61 +/- 177.30 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
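The usage block in the card is left as a TODO. A possible completion, not the author's own code: it assumes the repository stores the policy as an SB3 zip named `a2c-AntBulletEnv-v0.zip` (the usual convention — check the repo's file list), and that evaluating without any training-time observation normalization is acceptable:

```python
import gym
import pybullet_envs  # noqa: F401  # registers AntBulletEnv-v0 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; adjust it to the actual file in the repository.
checkpoint = load_from_hub(repo_id="igpaub/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```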
hlopez/ViT_waste_classifier
hlopez
2022-07-22T09:11:40Z
0
0
null
[ "en", "region:us" ]
null
2022-07-22T08:57:04Z
--- language: en --- ## Description This model is a ViT trained to classify waste images into 6 categories: - Organic - Carton - Glass - General - Plastics - Dangerous. The repository related to this model is: https://github.com/hectorLop/Waste-Detector Also, the code related to this model can be found here https://github.com/hectorLop/Waste-Detector/blob/main/waste_detector/classifier/sagemaker/model.py ### Requirements - Works with RGB images of size 224x224
bigmorning/distilbert_new2_0040
bigmorning
2022-07-22T08:50:47Z
3
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-22T08:50:33Z
--- tags: - generated_from_keras_callback model-index: - name: distilbert_new2_0040 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_new2_0040 This model is a fine-tuned version of [/content/drive/MyDrive/Colab Notebooks/oscar/trybackup_distilbert/new_backup_0105105](https://huggingface.co//content/drive/MyDrive/Colab Notebooks/oscar/trybackup_distilbert/new_backup_0105105) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9702 - Validation Loss: 0.9482 - Epoch: 39 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0180 | 0.9873 | 0 | | 1.0163 | 0.9878 | 1 | | 1.0145 | 0.9856 | 2 | | 1.0139 | 0.9830 | 3 | | 1.0122 | 0.9831 | 4 | | 1.0118 | 0.9830 | 5 | | 1.0094 | 0.9800 | 6 | | 1.0075 | 0.9809 | 7 | | 1.0066 | 0.9784 | 8 | | 1.0062 | 0.9768 | 9 | | 1.0032 | 0.9751 | 10 | | 1.0023 | 0.9764 | 11 | | 1.0008 | 0.9735 | 12 | | 0.9994 | 0.9730 | 13 | | 0.9986 | 0.9761 | 14 | | 0.9975 | 0.9714 | 15 | | 0.9953 | 0.9708 | 16 | | 0.9941 | 0.9683 | 17 | | 0.9933 | 0.9681 | 18 | | 0.9920 | 0.9688 | 19 | | 0.9907 | 0.9648 | 20 | | 0.9897 | 0.9625 | 21 | | 0.9890 | 0.9642 | 22 | | 0.9873 | 0.9633 | 23 | | 0.9867 | 0.9618 | 24 | | 0.9857 | 0.9600 | 25 | | 0.9839 | 0.9598 | 26 | | 0.9827 | 0.9585 | 27 | | 0.9821 | 0.9607 | 28 | | 0.9809 | 0.9579 | 29 | | 0.9803 | 0.9561 | 30 | | 0.9786 | 0.9563 | 31 | | 0.9774 | 0.9536 | 32 | | 0.9766 | 0.9542 | 33 | | 0.9756 | 0.9523 | 34 | | 0.9743 | 0.9525 | 35 | | 0.9730 | 0.9513 | 36 | | 0.9721 | 0.9507 | 37 | | 0.9715 | 0.9506 | 38 | | 0.9702 | 0.9482 | 39 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
ptrsxu/bert-base-chinese
ptrsxu
2022-07-22T08:09:06Z
3
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-10T06:48:33Z
--- language: zh --- # Bert-base-chinese ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [How to Get Started With the Model](#how-to-get-started-with-the-model) # Model Details - **Model Description:** This model has been pre-trained for Chinese, training and random input masking has been applied independently to word pieces (as in the original BERT paper). - **Developed by:** HuggingFace team - **Model Type:** Fill-Mask - **Language(s):** Chinese - **License:** [More Information needed] - **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model. ## Uses #### Direct Use This model can be used for masked language modeling ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Training #### Training Procedure * **type_vocab_size:** 2 * **vocab_size:** 21128 * **num_hidden_layers:** 12 #### Training Data [More Information Needed] ## Evaluation #### Results [More Information Needed] ## How to Get Started With the Model ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese") model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese") ```
semy/hf-model-full-0
semy
2022-07-22T07:02:08Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-19T10:07:46Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: hf-model-full-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hf-model-full-0 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4295 - Accuracy: 0.802 - F1: 0.802 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:| | 0.9446 | 1.0 | 563 | 0.4208 | 0.793 | 0.793 | | 0.1259 | 2.0 | 1126 | 0.4295 | 0.802 | 0.802 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
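No inference example is given. A minimal sketch using the standard text-classification pipeline; the returned label names depend on the checkpoint's config (likely the generic LABEL_0/LABEL_1, since the card does not name the classes), and the input sentence is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="semy/hf-model-full-0")
print(classifier("Replace this with a sentence from the task's domain."))
```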
RupE/xlm-roberta-base-finetuned-panx-en
RupE
2022-07-22T05:50:25Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-22T05:47:33Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.en split: train args: PAN-X.en metrics: - name: F1 type: f1 value: 0.5541666666666666 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.6380 - F1: 0.5542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 13 | 1.0388 | 0.1801 | | No log | 2.0 | 26 | 0.7545 | 0.5053 | | No log | 3.0 | 39 | 0.6380 | 0.5542 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
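A minimal sketch of running the fine-tuned tagger through the token-classification pipeline; the example sentence is illustrative, and the entity labels follow whatever PAN-X scheme is stored in the checkpoint's config:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="RupE/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Ada Lovelace worked with Charles Babbage in London."))
```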
okho0653/Bio_ClinicalBERT-zero-shot-finetuned-50cad
okho0653
2022-07-22T05:42:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-22T05:29:59Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: Bio_ClinicalBERT-zero-shot-finetuned-50cad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bio_ClinicalBERT-zero-shot-finetuned-50cad This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1475 - Accuracy: 0.5 - F1: 0.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
jaeyeon/korean-aihub-learning-3
jaeyeon
2022-07-22T05:35:44Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-20T10:44:32Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: korean-aihub-learning-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # korean-aihub-learning-3 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2854 - Wer: 0.7921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.99 | 35 | 45.5713 | 1.0 | | No log | 1.99 | 70 | 24.4376 | 1.0 | | 35.4145 | 2.99 | 105 | 18.3030 | 1.0 | | 35.4145 | 3.99 | 140 | 12.6702 | 1.0 | | 35.4145 | 4.99 | 175 | 7.4939 | 1.0 | | 11.687 | 5.99 | 210 | 4.9592 | 1.0 | | 11.687 | 6.99 | 245 | 4.6777 | 1.0 | | 11.687 | 7.99 | 280 | 4.6597 | 1.0 | | 4.8003 | 8.99 | 315 | 4.6777 | 1.0 | | 4.8003 | 9.99 | 350 | 4.7003 | 1.0 | | 4.8003 | 10.99 | 385 | 4.6129 | 1.0 | | 4.6383 | 11.99 | 420 | 4.6209 | 1.0 | | 4.6383 | 12.99 | 455 | 4.6035 | 1.0 | | 4.6383 | 13.99 | 490 | 4.6166 | 1.0 | | 4.577 | 14.99 | 525 | 4.6026 | 1.0 | | 4.577 | 15.99 | 560 | 4.5337 | 1.0 | | 4.577 | 16.99 | 595 | 4.5284 | 1.0 | | 4.5124 | 17.99 | 630 | 4.5710 | 1.0 | | 4.5124 | 18.99 | 665 | 4.5223 | 1.0 | | 4.3818 | 19.99 | 700 | 4.4472 | 1.0 | | 4.3818 | 20.99 | 735 | 4.4272 | 0.9977 | | 4.3818 | 21.99 | 770 | 4.4160 | 0.9977 | | 4.2796 | 22.99 | 805 | 4.3741 | 0.9988 | | 4.2796 | 23.99 | 840 | 4.3087 | 1.0 | | 4.2796 | 24.99 | 875 | 4.2336 | 1.0 | | 4.0489 | 25.99 | 910 | 4.1352 | 0.9988 | | 4.0489 | 26.99 | 945 | 4.0669 | 1.0 | | 4.0489 | 27.99 | 980 | 3.8551 | 0.9988 | | 3.6122 | 28.99 | 1015 | 3.6699 | 0.9919 | | 3.6122 | 29.99 | 1050 | 3.4580 | 0.9781 | | 3.6122 | 30.99 | 1085 | 3.1899 | 0.9434 | | 2.8886 | 31.99 | 1120 | 3.0746 | 0.9550 | | 2.8886 | 32.99 | 1155 | 2.8143 | 0.9353 | | 2.8886 | 33.99 | 1190 | 2.7004 | 0.9122 | | 2.0277 | 34.99 | 1225 | 2.5284 | 0.9076 | | 2.0277 | 35.99 | 1260 | 2.4677 | 0.8972 | | 2.0277 | 36.99 | 1295 | 2.3426 | 0.8568 | | 1.2486 | 37.99 | 1330 | 2.2456 | 0.8822 | | 1.2486 | 38.99 | 1365 | 2.3250 | 0.9238 | | 0.7572 | 39.99 | 1400 | 2.2832 | 0.8557 | | 0.7572 | 40.99 | 1435 | 2.2671 | 0.8406 | | 0.7572 | 41.99 | 1470 | 2.3070 | 0.8857 | | 0.4768 | 42.99 | 1505 | 2.2138 | 0.8476 | | 0.4768 | 43.99 | 1540 | 2.2034 | 0.8799 | | 0.4768 | 44.99 | 1575 | 2.2215 | 0.8487 | | 0.3362 | 45.99 | 1610 | 2.3416 | 0.8834 | | 0.3362 | 46.99 | 1645 | 2.3452 | 0.8383 | | 0.3362 | 47.99 | 1680 | 2.2449 | 0.8360 | | 0.257 | 48.99 | 1715 | 2.2249 | 0.8199 | | 0.257 | 49.99 | 1750 | 2.3455 | 0.8106 | | 0.257 | 50.99 | 1785 | 2.2537 | 0.8233 | | 0.2116 | 51.99 | 1820 | 2.2501 | 0.8025 | | 0.2116 | 52.99 | 1855 | 2.3180 | 0.8649 | | 0.2116 | 53.99 | 1890 | 2.1855 | 0.8106 | | 0.1787 | 
54.99 | 1925 | 2.2140 | 0.8014 | | 0.1787 | 55.99 | 1960 | 2.3140 | 0.8453 | | 0.1787 | 56.99 | 1995 | 2.2140 | 0.8025 | | 0.1498 | 57.99 | 2030 | 2.3381 | 0.8314 | | 0.1498 | 58.99 | 2065 | 2.2591 | 0.8256 | | 0.1372 | 59.99 | 2100 | 2.2538 | 0.7979 | | 0.1372 | 60.99 | 2135 | 2.2052 | 0.7933 | | 0.1372 | 61.99 | 2170 | 2.2370 | 0.8233 | | 0.129 | 62.99 | 2205 | 2.2331 | 0.7898 | | 0.129 | 63.99 | 2240 | 2.3022 | 0.8002 | | 0.129 | 64.99 | 2275 | 2.3514 | 0.7956 | | 0.1075 | 65.99 | 2310 | 2.3303 | 0.8279 | | 0.1075 | 66.99 | 2345 | 2.2747 | 0.8025 | | 0.1075 | 67.99 | 2380 | 2.2899 | 0.8152 | | 0.0979 | 68.99 | 2415 | 2.3299 | 0.8164 | | 0.0979 | 69.99 | 2450 | 2.1819 | 0.7945 | | 0.0979 | 70.99 | 2485 | 2.2141 | 0.8222 | | 0.0973 | 71.99 | 2520 | 2.3683 | 0.8395 | | 0.0973 | 72.99 | 2555 | 2.2235 | 0.8199 | | 0.0973 | 73.99 | 2590 | 2.2474 | 0.8048 | | 0.0814 | 74.99 | 2625 | 2.3116 | 0.7968 | | 0.0814 | 75.99 | 2660 | 2.2494 | 0.7945 | | 0.0814 | 76.99 | 2695 | 2.2441 | 0.7968 | | 0.0745 | 77.99 | 2730 | 2.2489 | 0.7864 | | 0.0745 | 78.99 | 2765 | 2.2568 | 0.7921 | | 0.0741 | 79.99 | 2800 | 2.2598 | 0.7875 | | 0.0741 | 80.99 | 2835 | 2.3131 | 0.8002 | | 0.0741 | 81.99 | 2870 | 2.2719 | 0.7898 | | 0.0662 | 82.99 | 2905 | 2.2901 | 0.7875 | | 0.0662 | 83.99 | 2940 | 2.3092 | 0.7979 | | 0.0662 | 84.99 | 2975 | 2.3361 | 0.8048 | | 0.0556 | 85.99 | 3010 | 2.3308 | 0.8152 | | 0.0556 | 86.99 | 3045 | 2.3106 | 0.8164 | | 0.0556 | 87.99 | 3080 | 2.3363 | 0.8002 | | 0.0504 | 88.99 | 3115 | 2.3588 | 0.7910 | | 0.0504 | 89.99 | 3150 | 2.3528 | 0.7956 | | 0.0504 | 90.99 | 3185 | 2.3201 | 0.7794 | | 0.0496 | 91.99 | 3220 | 2.3386 | 0.7991 | | 0.0496 | 92.99 | 3255 | 2.3423 | 0.7956 | | 0.0496 | 93.99 | 3290 | 2.3312 | 0.7956 | | 0.0468 | 94.99 | 3325 | 2.3362 | 0.7968 | | 0.0468 | 95.99 | 3360 | 2.2962 | 0.7887 | | 0.0468 | 96.99 | 3395 | 2.2864 | 0.7841 | | 0.0475 | 97.99 | 3430 | 2.2870 | 0.7898 | | 0.0475 | 98.99 | 3465 | 2.2866 | 0.7898 | | 0.0411 | 99.99 | 3500 | 2.2854 | 0.7921 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
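The card stops at the training log. A minimal transcription sketch, assuming the checkpoint includes the CTC tokenizer/processor saved during fine-tuning; the audio filename is a placeholder and should point to a 16 kHz mono recording:

```python
from transformers import pipeline

# wav2vec2-xls-r CTC checkpoints can be served through the ASR pipeline
# (decoding an audio file requires ffmpeg to be installed).
asr = pipeline("automatic-speech-recognition", model="jaeyeon/korean-aihub-learning-3")
print(asr("sample_korean_utterance.wav")["text"])
```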
RupE/xlm-roberta-base-finetuned-panx-de
RupE
2022-07-22T05:15:36Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-22T04:35:55Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: train args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8503293209175562 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1354 - F1: 0.8503 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 132 | 0.1757 | 0.8055 | | No log | 2.0 | 264 | 0.1372 | 0.8424 | | No log | 3.0 | 396 | 0.1354 | 0.8503 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Shunichiro/distilbert-base-uncased-finetuned-squad
Shunichiro
2022-07-22T05:11:33Z
31
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-07-06T06:58:54Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.0244 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 30 | 3.5643 | | No log | 2.0 | 60 | 2.4546 | | No log | 3.0 | 90 | 2.3018 | | No log | 4.0 | 120 | 2.4636 | | No log | 5.0 | 150 | 2.4736 | | No log | 6.0 | 180 | 2.5580 | | No log | 7.0 | 210 | 2.6686 | | No log | 8.0 | 240 | 2.7249 | | No log | 9.0 | 270 | 3.2596 | | No log | 10.0 | 300 | 3.5904 | | No log | 11.0 | 330 | 3.6709 | | No log | 12.0 | 360 | 3.6431 | | No log | 13.0 | 390 | 3.6343 | | No log | 14.0 | 420 | 3.8316 | | No log | 15.0 | 450 | 3.6363 | | No log | 16.0 | 480 | 3.8468 | | 0.8931 | 17.0 | 510 | 3.7114 | | 0.8931 | 18.0 | 540 | 3.8719 | | 0.8931 | 19.0 | 570 | 4.0872 | | 0.8931 | 20.0 | 600 | 4.2989 | | 0.8931 | 21.0 | 630 | 4.5494 | | 0.8931 | 22.0 | 660 | 4.2565 | | 0.8931 | 23.0 | 690 | 4.3009 | | 0.8931 | 24.0 | 720 | 4.1816 | | 0.8931 | 25.0 | 750 | 4.2583 | | 0.8931 | 26.0 | 780 | 4.2276 | | 0.8931 | 27.0 | 810 | 4.3481 | | 0.8931 | 28.0 | 840 | 4.4369 | | 0.8931 | 29.0 | 870 | 4.4891 | | 0.8931 | 30.0 | 900 | 4.5521 | | 0.8931 | 31.0 | 930 | 4.5201 | | 0.8931 | 32.0 | 960 | 4.6323 | | 0.8931 | 33.0 | 990 | 4.4766 | | 0.0297 | 34.0 | 1020 | 4.7612 | | 0.0297 | 35.0 | 1050 | 4.9057 | | 0.0297 | 36.0 | 1080 | 4.7580 | | 0.0297 | 37.0 | 1110 | 4.6351 | | 0.0297 | 38.0 | 1140 | 4.6495 | | 0.0297 | 39.0 | 1170 | 4.5980 | | 0.0297 | 40.0 | 1200 | 4.6370 | | 0.0297 | 41.0 | 1230 | 4.6523 | | 0.0297 | 42.0 | 1260 | 4.5802 | | 0.0297 | 43.0 | 1290 | 4.6304 | | 0.0297 | 44.0 | 1320 | 4.7111 | | 0.0297 | 45.0 | 1350 | 4.7219 | | 0.0297 | 46.0 | 1380 | 4.7323 | | 0.0297 | 47.0 | 1410 | 4.9115 | | 0.0297 | 48.0 | 1440 | 4.7873 | | 0.0297 | 49.0 | 1470 | 4.9340 | | 0.0023 | 50.0 | 1500 | 5.0638 | | 0.0023 | 51.0 | 1530 | 5.0750 | | 0.0023 | 52.0 | 1560 | 4.9338 | | 0.0023 | 53.0 | 1590 | 4.9197 | | 0.0023 | 54.0 | 1620 | 4.9282 | | 0.0023 | 55.0 | 1650 | 5.0038 | | 0.0023 | 56.0 | 1680 | 4.9848 | | 0.0023 | 57.0 | 1710 | 4.9932 | | 0.0023 | 58.0 | 1740 | 5.0134 | | 0.0023 | 59.0 | 1770 | 5.0303 | | 0.0023 | 60.0 | 1800 | 5.0244 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Tokenizers 0.12.1
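A minimal extractive question-answering sketch using the standard pipeline; the question and context strings are placeholders:

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="Shunichiro/distilbert-base-uncased-finetuned-squad")
result = qa(question="What does the model predict?",
            context="The model extracts an answer span from a context passage.")
print(result["answer"], result["score"])
```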
huggingtweets/hotwingsuk
huggingtweets
2022-07-22T03:26:48Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-22T03:25:34Z
--- language: en thumbnail: http://www.huggingtweets.com/hotwingsuk/1658460403599/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1280474754214957056/GKqk3gAm_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">HotWings</div> <div style="text-align: center; font-size: 14px;">@hotwingsuk</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from HotWings. | Data | HotWings | | --- | --- | | Tweets downloaded | 2057 | | Retweets | 69 | | Short tweets | 258 | | Tweets kept | 1730 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3opu8h6o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hotwingsuk's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bzf76pmf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bzf76pmf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hotwingsuk') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
jsalvatier/Reinforce-cartpole1
jsalvatier
2022-07-22T00:41:21Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-22T00:41:06Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole1 results: - metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
Gianni33/q-Taxi-v3
Gianni33
2022-07-21T22:38:49Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-21T22:38:44Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.54 +/- 2.70 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="Gianni33/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Gianni33/q-FrozenLake-v1-4x4-noSlippery
Gianni33
2022-07-21T22:30:59Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-21T22:30:53Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Gianni33/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
TheJarmanitor/rl-class-1
TheJarmanitor
2022-07-21T21:00:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-21T19:57:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 277.12 +/- 19.47 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
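The usage block in the card is a TODO. A possible completion, not the author's own code: the zip filename is an assumption (check the repository's file list), and `gym[box2d]` must be installed for LunarLander-v2:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; adjust it to the actual file in the repository.
checkpoint = load_from_hub(repo_id="TheJarmanitor/rl-class-1",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```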
AdoubleLen/q-Taxi-v3
AdoubleLen
2022-07-21T20:40:57Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-21T20:40:13Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="AdoubleLen/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Willaim/AI
Willaim
2022-07-21T20:26:26Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-07-21T20:26:26Z
--- license: bigscience-bloom-rail-1.0 ---
trevorj/BART_reddit_other
trevorj
2022-07-21T18:56:10Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-21T16:49:35Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: BART_reddit_other results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BART_reddit_other This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5792 - Rouge1: 18.5705 - Rouge2: 5.0107 - Rougel: 15.2581 - Rougelsum: 16.082 - Gen Len: 19.402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 3.7887 | 1.0 | 1875 | 3.6044 | 18.4668 | 5.182 | 15.359 | 16.169 | 19.341 | | 3.3816 | 2.0 | 3750 | 3.5628 | 18.0998 | 4.8937 | 15.0179 | 15.7615 | 17.789 | | 3.134 | 3.0 | 5625 | 3.5792 | 18.5705 | 5.0107 | 15.2581 | 16.082 | 19.402 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
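No usage snippet is included. A minimal summarization sketch; the input post and the length limits are placeholders chosen to match the short, TL;DR-style targets reported in the card:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="trevorj/BART_reddit_other")
post = ("Replace this with a long Reddit post; the model was tuned to produce "
        "short TL;DR-style summaries of such posts.")
print(summarizer(post, max_length=30, min_length=5)[0]["summary_text"])
```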
enoriega/rule_learning_margin_1mm_many_negatives_spanpred_attention
enoriega
2022-07-21T18:09:20Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "generated_from_trainer", "dataset:enoriega/odinsynth_dataset", "endpoints_compatible", "region:us" ]
null
2022-07-20T06:09:21Z
--- tags: - generated_from_trainer datasets: - enoriega/odinsynth_dataset model-index: - name: rule_learning_margin_1mm_many_negatives_spanpred_attention results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rule_learning_margin_1mm_many_negatives_spanpred_attention This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.2369 - Margin Accuracy: 0.8923 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2000 - total_train_batch_size: 8000 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Margin Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------------:| | 0.3814 | 0.16 | 20 | 0.3909 | 0.8317 | | 0.349 | 0.32 | 40 | 0.3335 | 0.8463 | | 0.3196 | 0.48 | 60 | 0.3101 | 0.8587 | | 0.3083 | 0.64 | 80 | 0.3010 | 0.8645 | | 0.2828 | 0.8 | 100 | 0.2871 | 0.8686 | | 0.294 | 0.96 | 120 | 0.2800 | 0.8715 | | 0.2711 | 1.12 | 140 | 0.2708 | 0.8741 | | 0.2663 | 1.28 | 160 | 0.2671 | 0.8767 | | 0.2656 | 1.44 | 180 | 0.2612 | 0.8822 | | 0.2645 | 1.6 | 200 | 0.2537 | 0.8851 | | 0.2625 | 1.76 | 220 | 0.2483 | 0.8878 | | 0.2651 | 1.92 | 240 | 0.2471 | 0.8898 | | 0.2407 | 2.08 | 260 | 0.2438 | 0.8905 | | 0.2315 | 2.24 | 280 | 0.2408 | 0.8909 | | 0.2461 | 2.4 | 300 | 0.2390 | 0.8918 | | 0.2491 | 2.56 | 320 | 0.2390 | 0.8921 | | 0.2511 | 2.72 | 340 | 0.2369 | 0.8918 | | 0.2341 | 2.88 | 360 | 0.2363 | 0.8921 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.1 - Tokenizers 0.12.1
iuihgisgsd/jhgifgdsg
iuihgisgsd
2022-07-21T18:01:37Z
0
0
null
[ "region:us" ]
null
2022-07-21T18:01:13Z
oghdogspsdfughuisdfhgsudfigdfg https://www.xing.com/events/new
rbiswas4/distilbert-base-uncased-finetuned-squad
rbiswas4
2022-07-21T17:48:26Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-07-21T11:24:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2137 | 1.0 | 5533 | 1.1516 | | 0.9463 | 2.0 | 11066 | 1.1115 | | 0.7665 | 3.0 | 16599 | 1.1542 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.0 - Datasets 2.3.2 - Tokenizers 0.12.1
bigmorning/distilbert_new_0100
bigmorning
2022-07-21T17:14:28Z
4
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-21T16:14:42Z
--- tags: - generated_from_keras_callback model-index: - name: distilgpt_new_0100 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt_new_0100 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0286 - Validation Loss: 0.9952 - Epoch: 99 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.5889 | 2.6197 | 0 | | 2.4784 | 2.2040 | 1 | | 2.1855 | 1.9980 | 2 | | 2.0181 | 1.8643 | 3 | | 1.9031 | 1.7652 | 4 | | 1.8166 | 1.6924 | 5 | | 1.7467 | 1.6360 | 6 | | 1.6904 | 1.5843 | 7 | | 1.6430 | 1.5421 | 8 | | 1.6021 | 1.5059 | 9 | | 1.5668 | 1.4761 | 10 | | 1.5359 | 1.4481 | 11 | | 1.5071 | 1.4220 | 12 | | 1.4841 | 1.4020 | 13 | | 1.4608 | 1.3797 | 14 | | 1.4399 | 1.3595 | 15 | | 1.4213 | 1.3426 | 16 | | 1.4031 | 1.3266 | 17 | | 1.3875 | 1.3113 | 18 | | 1.3735 | 1.3024 | 19 | | 1.3600 | 1.2871 | 20 | | 1.3456 | 1.2753 | 21 | | 1.3336 | 1.2648 | 22 | | 1.3214 | 1.2539 | 23 | | 1.3103 | 1.2451 | 24 | | 1.3005 | 1.2335 | 25 | | 1.2905 | 1.2258 | 26 | | 1.2815 | 1.2179 | 27 | | 1.2728 | 1.2123 | 28 | | 1.2643 | 1.2029 | 29 | | 1.2564 | 1.1980 | 30 | | 1.2494 | 1.1877 | 31 | | 1.2414 | 1.1806 | 32 | | 1.2348 | 1.1788 | 33 | | 1.2290 | 1.1699 | 34 | | 1.2209 | 1.1654 | 35 | | 1.2156 | 1.1575 | 36 | | 1.2110 | 1.1537 | 37 | | 1.2046 | 1.1499 | 38 | | 1.1986 | 1.1436 | 39 | | 1.1940 | 1.1408 | 40 | | 1.1877 | 1.1356 | 41 | | 1.1830 | 1.1314 | 42 | | 1.1779 | 1.1278 | 43 | | 1.1737 | 1.1211 | 44 | | 1.1692 | 1.1192 | 45 | | 1.1647 | 1.1163 | 46 | | 1.1611 | 1.1107 | 47 | | 1.1560 | 1.1066 | 48 | | 1.1521 | 1.1060 | 49 | | 1.1489 | 1.1002 | 50 | | 1.1440 | 1.0960 | 51 | | 1.1406 | 1.0931 | 52 | | 1.1373 | 1.0897 | 53 | | 1.1329 | 1.0855 | 54 | | 1.1302 | 1.0842 | 55 | | 1.1265 | 1.0818 | 56 | | 1.1237 | 1.0784 | 57 | | 1.1204 | 1.0737 | 58 | | 1.1173 | 1.0714 | 59 | | 1.1140 | 1.0694 | 60 | | 1.1112 | 1.0691 | 61 | | 1.1083 | 1.0668 | 62 | | 1.1044 | 1.0611 | 63 | | 1.1027 | 1.0607 | 64 | | 1.0990 | 1.0586 | 65 | | 1.0969 | 1.0545 | 66 | | 1.0944 | 1.0522 | 67 | | 1.0921 | 1.0517 | 68 | | 1.0891 | 1.0496 | 69 | | 1.0862 | 1.0457 | 70 | | 1.0828 | 1.0448 | 71 | | 1.0824 | 1.0439 | 72 | | 1.0793 | 1.0389 | 73 | | 1.0769 | 1.0375 | 74 | | 1.0740 | 1.0362 | 75 | | 1.0717 | 1.0358 | 76 | | 1.0700 | 1.0299 | 77 | | 1.0675 | 1.0312 | 78 | | 1.0639 | 1.0288 | 79 | | 1.0643 | 1.0270 | 80 | | 1.0607 | 1.0258 | 81 | | 1.0602 | 1.0233 | 82 | | 1.0568 | 1.0225 | 83 | | 1.0557 | 1.0198 | 84 | | 1.0534 | 1.0179 | 85 | | 1.0512 | 1.0165 | 86 | | 1.0495 | 1.0170 | 87 | | 1.0478 | 1.0124 | 88 | | 1.0458 | 1.0134 | 89 | | 1.0439 | 1.0104 | 90 | | 1.0418 | 1.0092 | 91 | | 1.0401 | 1.0057 | 92 | | 1.0377 | 1.0035 | 93 | | 1.0370 | 1.0037 | 94 | | 1.0345 | 1.0029 | 95 | | 1.0339 | 1.0014 | 96 | | 1.0322 | 1.0016 | 97 | | 1.0296 | 0.9986 | 98 | | 1.0286 | 0.9952 | 99 | ### Framework 
versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
trevorj/BART_reddit_gaming
trevorj
2022-07-21T16:51:59Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-21T15:20:54Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: BART_reddit_gaming results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BART_reddit_gaming This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7373 - Rouge1: 18.1202 - Rouge2: 4.6045 - Rougel: 15.1273 - Rougelsum: 15.7601 - Gen Len: 18.208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 3.864 | 1.0 | 1875 | 3.7752 | 17.3754 | 4.51 | 14.6763 | 15.22 | 16.944 | | 3.4755 | 2.0 | 3750 | 3.7265 | 17.8066 | 4.4188 | 14.9432 | 15.5396 | 18.104 | | 3.2629 | 3.0 | 5625 | 3.7373 | 18.1202 | 4.6045 | 15.1273 | 15.7601 | 18.208 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1