| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-04 00:37:20 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (537 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-04 00:37:04 |
| card | string (length) | 11 | 1.01M |
victorbahlangene/roberta-base-fine-Disaster-Tweets-Part3
victorbahlangene
2022-11-08T21:52:14Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-08T21:41:40Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: roberta-base-fine-Disaster-Tweets-Part3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-fine-Disaster-Tweets-Part3 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3882 - Accuracy: 0.8380 - F1: 0.8377 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 203 | 0.4632 | 0.8179 | 0.8184 | | No log | 2.0 | 406 | 0.3882 | 0.8380 | 0.8377 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
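The card above leaves its usage sections as "More information needed"; below is a minimal inference sketch assuming the standard 🤗 Transformers `pipeline` API. The example tweet and printed labels are illustrative, since the card does not document the checkpoint's label mapping.

```python
# Minimal inference sketch for the disaster-tweet classifier above.
# Assumes the standard transformers text-classification pipeline; the
# checkpoint's id2label mapping is not documented in the card.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="victorbahlangene/roberta-base-fine-Disaster-Tweets-Part3",
)
print(clf("Forest fire near La Ronge Sask. Canada"))
# -> e.g. [{'label': 'LABEL_1', 'score': 0.98}]  (labels depend on config)
```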
victorbahlangene/bert-base-uncased-fine-Disaster-Tweets-Part3
victorbahlangene
2022-11-08T21:22:11Z
115
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-08T21:06:14Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-base-uncased-fine-Disaster-Tweets-Part3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-fine-Disaster-Tweets-Part3 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3901 - Accuracy: 0.8459 - F1: 0.8451 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 203 | 0.4036 | 0.8170 | 0.8171 | | No log | 2.0 | 406 | 0.3901 | 0.8459 | 0.8451 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
santiagoahl/vit_model_santiago_ahumada
santiagoahl
2022-11-08T20:28:28Z
189
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-08T18:52:45Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - beans metrics: - accuracy model-index: - name: vit_model_santiago_ahumada results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_model_santiago_ahumada This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0164 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.143 | 3.85 | 500 | 0.0164 | 1.0 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
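Usage is again left as "More information needed"; here is a minimal image-classification sketch, assuming standard `pipeline` usage. The image path is a placeholder, and the expected class names are the ones from the beans dataset the card cites.

```python
# Minimal sketch: classify a bean-leaf photo with the fine-tuned ViT.
# "leaf.jpg" is a placeholder path; the beans dataset classes are
# angular_leaf_spot, bean_rust, and healthy.
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="santiagoahl/vit_model_santiago_ahumada",
)
print(clf("leaf.jpg"))
```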
giulio86/65
giulio86
2022-11-08T20:18:43Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-08T20:18:43Z
--- license: creativeml-openrail-m ---
huggingtweets/big___oven-codeinecucumber
huggingtweets
2022-11-08T19:32:56Z
103
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-25T19:41:48Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1579203041764442116/RSLookYD_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1571653458972794884/eaxhUsib_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Gutted & oskcar</div> <div style="text-align: center; font-size: 14px;">@big___oven-codeinecucumber</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Gutted & oskcar. | Data | Gutted | oskcar | | --- | --- | --- | | Tweets downloaded | 1761 | 2669 | | Retweets | 243 | 635 | | Short tweets | 326 | 308 | | Tweets kept | 1192 | 1726 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qyf2pl5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @big___oven-codeinecucumber's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rr9twhn) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rr9twhn/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/big___oven-codeinecucumber') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base
gary109
2022-11-08T19:17:51Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "gary109/AI_Light_Dance", "generated_from_trainer", "dataset:ai_light_dance", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-07T10:38:40Z
--- tags: - automatic-speech-recognition - gary109/AI_Light_Dance - generated_from_trainer datasets: - ai_light_dance model-index: - name: ai-light-dance_drums_ft_pretrain_wav2vec2-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai-light-dance_drums_ft_pretrain_wav2vec2-base This model is a fine-tuned version of [gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base](https://huggingface.co/gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base) on the GARY109/AI_LIGHT_DANCE - ONSET-DRUMS dataset. It achieves the following results on the evaluation set: - Loss: 1.8991 - Wer: 0.6046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.9 | 8 | 2.0434 | 0.6226 | | 0.4739 | 1.9 | 16 | 2.1024 | 0.6247 | | 0.4693 | 2.9 | 24 | 1.9824 | 0.6211 | | 0.5139 | 3.9 | 32 | 2.2962 | 0.6429 | | 0.5081 | 4.9 | 40 | 2.2201 | 0.6292 | | 0.5081 | 5.9 | 48 | 2.1399 | 0.6208 | | 0.5785 | 6.9 | 56 | 2.1451 | 0.6417 | | 0.533 | 7.9 | 64 | 2.1184 | 0.6330 | | 0.5141 | 8.9 | 72 | 2.0230 | 0.6342 | | 0.4971 | 9.9 | 80 | 2.2137 | 0.6381 | | 0.4971 | 10.9 | 88 | 2.1159 | 0.6253 | | 0.5645 | 11.9 | 96 | 2.0966 | 0.6247 | | 0.4932 | 12.9 | 104 | 1.9249 | 0.6223 | | 0.4918 | 13.9 | 112 | 2.0445 | 0.6235 | | 0.5053 | 14.9 | 120 | 2.1317 | 0.6304 | | 0.5053 | 15.9 | 128 | 2.0723 | 0.6256 | | 0.5565 | 16.9 | 136 | 2.1390 | 0.6402 | | 0.4819 | 17.9 | 144 | 1.9556 | 0.6321 | | 0.5131 | 18.9 | 152 | 1.9886 | 0.6333 | | 0.4798 | 19.9 | 160 | 1.9700 | 0.6259 | | 0.4798 | 20.9 | 168 | 1.9771 | 0.6295 | | 0.5221 | 21.9 | 176 | 1.9880 | 0.6235 | | 0.4862 | 22.9 | 184 | 2.0994 | 0.6298 | | 0.4831 | 23.9 | 192 | 2.0521 | 0.6205 | | 0.4952 | 24.9 | 200 | 1.9838 | 0.6064 | | 0.4952 | 25.9 | 208 | 2.0319 | 0.6103 | | 0.5119 | 26.9 | 216 | 2.0419 | 0.6160 | | 0.4996 | 27.9 | 224 | 2.0073 | 0.6178 | | 0.488 | 28.9 | 232 | 2.1740 | 0.6304 | | 0.4978 | 29.9 | 240 | 2.2731 | 0.6163 | | 0.4978 | 30.9 | 248 | 2.2420 | 0.6205 | | 0.5259 | 31.9 | 256 | 2.0561 | 0.6184 | | 0.47 | 32.9 | 264 | 1.9455 | 0.6136 | | 0.5132 | 33.9 | 272 | 1.9307 | 0.6043 | | 0.4972 | 34.9 | 280 | 2.0536 | 0.6127 | | 0.4972 | 35.9 | 288 | 1.9113 | 0.6223 | | 0.5147 | 36.9 | 296 | 1.9317 | 0.6286 | | 0.4914 | 37.9 | 304 | 2.1810 | 0.6241 | | 0.472 | 38.9 | 312 | 2.1403 | 0.6160 | | 0.4825 | 39.9 | 320 | 2.1141 | 0.6094 | | 0.4825 | 40.9 | 328 | 2.2870 | 0.6031 | | 0.5138 | 41.9 | 336 | 2.1404 | 0.6181 | | 0.48 | 42.9 | 344 | 2.0243 | 0.6265 | | 0.4598 | 43.9 | 352 | 2.1117 | 0.6199 | | 0.474 | 44.9 | 360 | 2.0378 | 0.6321 | | 0.474 | 45.9 | 368 | 2.1919 | 0.6211 | | 0.4933 | 46.9 | 376 | 2.3645 | 0.6109 | | 0.4692 | 47.9 | 384 | 2.1920 | 0.6076 | | 0.4716 | 48.9 | 392 | 2.3663 | 0.6034 | | 0.4601 | 49.9 | 400 | 2.2838 | 0.6280 | | 0.4601 | 50.9 
| 408 | 2.0287 | 0.6148 | | 0.4891 | 51.9 | 416 | 2.1346 | 0.6130 | | 0.4506 | 52.9 | 424 | 2.1556 | 0.6181 | | 0.4581 | 53.9 | 432 | 2.0560 | 0.6229 | | 0.4485 | 54.9 | 440 | 1.9944 | 0.5971 | | 0.4485 | 55.9 | 448 | 1.9791 | 0.6097 | | 0.4942 | 56.9 | 456 | 2.1166 | 0.6070 | | 0.4748 | 57.9 | 464 | 2.0271 | 0.6124 | | 0.4229 | 58.9 | 472 | 2.0437 | 0.6229 | | 0.45 | 59.9 | 480 | 2.1012 | 0.6142 | | 0.45 | 60.9 | 488 | 1.9151 | 0.6049 | | 0.4936 | 61.9 | 496 | 1.8991 | 0.6046 | | 0.4602 | 62.9 | 504 | 1.9813 | 0.6112 | | 0.4626 | 63.9 | 512 | 1.9372 | 0.6136 | | 0.445 | 64.9 | 520 | 1.9060 | 0.6154 | | 0.445 | 65.9 | 528 | 1.9574 | 0.6151 | | 0.4907 | 66.9 | 536 | 2.0947 | 0.6022 | | 0.4723 | 67.9 | 544 | 2.0061 | 0.6010 | | 0.4103 | 68.9 | 552 | 1.9557 | 0.6094 | | 0.4808 | 69.9 | 560 | 2.1042 | 0.6088 | | 0.4808 | 70.9 | 568 | 2.1360 | 0.6073 | | 0.4682 | 71.9 | 576 | 2.1290 | 0.6013 | | 0.4472 | 72.9 | 584 | 1.9454 | 0.5989 | | 0.4259 | 73.9 | 592 | 2.0937 | 0.6043 | | 0.4464 | 74.9 | 600 | 2.0822 | 0.6058 | | 0.4464 | 75.9 | 608 | 2.0128 | 0.6058 | | 0.4775 | 76.9 | 616 | 1.9744 | 0.6094 | | 0.4394 | 77.9 | 624 | 1.9992 | 0.6010 | | 0.418 | 78.9 | 632 | 2.1693 | 0.5947 | | 0.4384 | 79.9 | 640 | 2.1326 | 0.5923 | | 0.4384 | 80.9 | 648 | 2.1151 | 0.5950 | | 0.4971 | 81.9 | 656 | 2.1581 | 0.5923 | | 0.4176 | 82.9 | 664 | 2.0876 | 0.6013 | | 0.4312 | 83.9 | 672 | 2.1316 | 0.5935 | | 0.4408 | 84.9 | 680 | 2.2627 | 0.5971 | | 0.4408 | 85.9 | 688 | 2.2799 | 0.6112 | | 0.4678 | 86.9 | 696 | 2.1239 | 0.5989 | | 0.4288 | 87.9 | 704 | 2.1574 | 0.5983 | | 0.4157 | 88.9 | 712 | 2.2125 | 0.5908 | | 0.444 | 89.9 | 720 | 2.0542 | 0.5986 | | 0.444 | 90.9 | 728 | 2.0899 | 0.5920 | | 0.4694 | 91.9 | 736 | 2.1122 | 0.6076 | | 0.4314 | 92.9 | 744 | 2.0634 | 0.5950 | | 0.4348 | 93.9 | 752 | 2.0333 | 0.6046 | | 0.4558 | 94.9 | 760 | 2.1188 | 0.5956 | | 0.4558 | 95.9 | 768 | 2.0606 | 0.5995 | | 0.461 | 96.9 | 776 | 2.0600 | 0.5971 | | 0.4258 | 97.9 | 784 | 2.0479 | 0.6040 | | 0.4395 | 98.9 | 792 | 2.1282 | 0.6055 | | 0.4282 | 99.9 | 800 | 2.0593 | 0.6043 | | 0.4282 | 100.9 | 808 | 2.0592 | 0.5920 | | 0.4623 | 101.9 | 816 | 2.0852 | 0.5944 | | 0.4392 | 102.9 | 824 | 2.2024 | 0.5920 | | 0.4308 | 103.9 | 832 | 2.1786 | 0.5935 | | 0.4375 | 104.9 | 840 | 2.1085 | 0.5911 | | 0.4375 | 105.9 | 848 | 2.0724 | 0.5974 | | 0.4501 | 106.9 | 856 | 2.1306 | 0.5881 | | 0.4273 | 107.9 | 864 | 2.1340 | 0.5899 | | 0.4234 | 108.9 | 872 | 2.1125 | 0.5980 | | 0.4289 | 109.9 | 880 | 2.0526 | 0.6007 | | 0.4289 | 110.9 | 888 | 2.0955 | 0.5884 | | 0.478 | 111.9 | 896 | 2.1146 | 0.5872 | | 0.4143 | 112.9 | 904 | 2.2310 | 0.5899 | | 0.4193 | 113.9 | 912 | 2.2165 | 0.5899 | | 0.4159 | 114.9 | 920 | 2.1631 | 0.5941 | | 0.4159 | 115.9 | 928 | 2.1371 | 0.5938 | | 0.4776 | 116.9 | 936 | 2.0972 | 0.5935 | | 0.4143 | 117.9 | 944 | 2.1248 | 0.5917 | | 0.4022 | 118.9 | 952 | 2.1317 | 0.5956 | | 0.4346 | 119.9 | 960 | 2.1237 | 0.5992 | | 0.4346 | 120.9 | 968 | 2.0684 | 0.5935 | | 0.4564 | 121.9 | 976 | 2.0722 | 0.5947 | | 0.4243 | 122.9 | 984 | 2.1361 | 0.5884 | | 0.413 | 123.9 | 992 | 2.1207 | 0.5893 | | 0.4113 | 124.9 | 1000 | 2.0697 | 0.5837 | | 0.4113 | 125.9 | 1008 | 2.1005 | 0.5875 | | 0.4426 | 126.9 | 1016 | 2.0822 | 0.5870 | | 0.4255 | 127.9 | 1024 | 2.0572 | 0.5959 | | 0.4214 | 128.9 | 1032 | 2.0343 | 0.5935 | | 0.4042 | 129.9 | 1040 | 2.0282 | 0.5902 | | 0.4042 | 130.9 | 1048 | 2.0314 | 0.5846 | | 0.4515 | 131.9 | 1056 | 2.0621 | 0.5870 | | 0.4138 | 132.9 | 1064 | 2.0704 | 0.5938 | | 0.4289 | 133.9 | 1072 | 2.0222 | 0.5896 | | 0.3908 | 
134.9 | 1080 | 2.0879 | 0.5855 | | 0.3908 | 135.9 | 1088 | 2.1068 | 0.5822 | | 0.4489 | 136.9 | 1096 | 2.0702 | 0.5837 | | 0.4191 | 137.9 | 1104 | 2.1093 | 0.5881 | | 0.4149 | 138.9 | 1112 | 2.1046 | 0.5819 | | 0.4127 | 139.9 | 1120 | 2.1729 | 0.5777 | | 0.4127 | 140.9 | 1128 | 2.1636 | 0.5810 | | 0.4449 | 141.9 | 1136 | 2.1515 | 0.5786 | | 0.3977 | 142.9 | 1144 | 2.1531 | 0.5774 | | 0.4121 | 143.9 | 1152 | 2.0857 | 0.5816 | | 0.4363 | 144.9 | 1160 | 2.1372 | 0.5822 | | 0.4363 | 145.9 | 1168 | 2.1902 | 0.5828 | | 0.4318 | 146.9 | 1176 | 2.1465 | 0.5831 | | 0.4112 | 147.9 | 1184 | 2.0697 | 0.5858 | | 0.4292 | 148.9 | 1192 | 2.0850 | 0.5837 | | 0.4182 | 149.9 | 1200 | 2.1171 | 0.5846 | | 0.4182 | 150.9 | 1208 | 2.1020 | 0.5867 | | 0.4381 | 151.9 | 1216 | 2.1052 | 0.5849 | | 0.4235 | 152.9 | 1224 | 2.1430 | 0.5864 | | 0.4173 | 153.9 | 1232 | 2.1131 | 0.5834 | | 0.3927 | 154.9 | 1240 | 2.1134 | 0.5846 | | 0.3927 | 155.9 | 1248 | 2.1173 | 0.5846 | | 0.4492 | 156.9 | 1256 | 2.0772 | 0.5801 | | 0.4313 | 157.9 | 1264 | 2.0309 | 0.5861 | | 0.4015 | 158.9 | 1272 | 2.0887 | 0.5819 | | 0.4268 | 159.9 | 1280 | 2.1812 | 0.5849 | | 0.4268 | 160.9 | 1288 | 2.1568 | 0.5881 | | 0.4496 | 161.9 | 1296 | 2.0805 | 0.5801 | | 0.4121 | 162.9 | 1304 | 2.0461 | 0.5872 | | 0.401 | 163.9 | 1312 | 2.0377 | 0.5864 | | 0.4192 | 164.9 | 1320 | 2.0183 | 0.5872 | | 0.4192 | 165.9 | 1328 | 2.0107 | 0.5855 | | 0.4466 | 166.9 | 1336 | 2.0528 | 0.5881 | | 0.3981 | 167.9 | 1344 | 2.0511 | 0.5878 | | 0.3967 | 168.9 | 1352 | 2.0374 | 0.5867 | | 0.4072 | 169.9 | 1360 | 2.0554 | 0.5867 | | 0.4072 | 170.9 | 1368 | 2.0388 | 0.5858 | | 0.4581 | 171.9 | 1376 | 2.0188 | 0.5914 | | 0.3937 | 172.9 | 1384 | 1.9999 | 0.5852 | | 0.4074 | 173.9 | 1392 | 1.9738 | 0.5840 | | 0.4085 | 174.9 | 1400 | 2.0090 | 0.5843 | | 0.4085 | 175.9 | 1408 | 1.9990 | 0.5864 | | 0.4224 | 176.9 | 1416 | 2.0391 | 0.5852 | | 0.4471 | 177.9 | 1424 | 2.0262 | 0.5855 | | 0.4233 | 178.9 | 1432 | 2.0621 | 0.5801 | | 0.409 | 179.9 | 1440 | 2.0486 | 0.5846 | | 0.409 | 180.9 | 1448 | 2.0508 | 0.5807 | | 0.4518 | 181.9 | 1456 | 2.0241 | 0.5887 | | 0.4077 | 182.9 | 1464 | 2.0169 | 0.5843 | | 0.4197 | 183.9 | 1472 | 2.0014 | 0.5896 | | 0.4237 | 184.9 | 1480 | 2.0189 | 0.5843 | | 0.4237 | 185.9 | 1488 | 2.0095 | 0.5867 | | 0.4394 | 186.9 | 1496 | 1.9993 | 0.5884 | | 0.4299 | 187.9 | 1504 | 2.0097 | 0.5899 | | 0.4198 | 188.9 | 1512 | 2.0049 | 0.5870 | | 0.4116 | 189.9 | 1520 | 1.9899 | 0.5875 | | 0.4116 | 190.9 | 1528 | 1.9814 | 0.5881 | | 0.445 | 191.9 | 1536 | 1.9820 | 0.5887 | | 0.4198 | 192.9 | 1544 | 1.9838 | 0.5881 | | 0.4065 | 193.9 | 1552 | 1.9849 | 0.5884 | | 0.3917 | 194.9 | 1560 | 1.9803 | 0.5867 | | 0.3917 | 195.9 | 1568 | 1.9777 | 0.5881 | | 0.4239 | 196.9 | 1576 | 1.9752 | 0.5875 | | 0.4183 | 197.9 | 1584 | 1.9766 | 0.5872 | | 0.3965 | 198.9 | 1592 | 1.9773 | 0.5872 | | 0.4144 | 199.9 | 1600 | 1.9781 | 0.5872 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
philschmid/pyannote-segmentation
philschmid
2022-11-08T17:15:47Z
1,078
8
pyannote-audio
[ "pyannote-audio", "pytorch", "pyannote", "pyannote-audio-model", "audio", "voice", "speech", "speaker", "speaker-segmentation", "voice-activity-detection", "overlapped-speech-detection", "resegmentation", "dataset:ami", "dataset:dihard", "dataset:voxconverse", "arxiv:2104.04045", "license:mit", "region:us" ]
voice-activity-detection
2022-11-08T17:13:14Z
--- tags: - pyannote - pyannote-audio - pyannote-audio-model - audio - voice - speech - speaker - speaker-segmentation - voice-activity-detection - overlapped-speech-detection - resegmentation datasets: - ami - dihard - voxconverse license: mit inference: false --- # 🎹 Speaker segmentation ![Example](example.png) Model from *[End-to-end speaker segmentation for overlap-aware resegmentation](http://arxiv.org/abs/2104.04045)*, by Hervé Bredin and Antoine Laurent. [Online demo](https://huggingface.co/spaces/pyannote/pretrained-pipelines) is available as a Hugging Face Space. ## Support For commercial enquiries and scientific consulting, please contact [me](mailto:[email protected]). For [technical questions](https://github.com/pyannote/pyannote-audio/discussions) and [bug reports](https://github.com/pyannote/pyannote-audio/issues), please check [pyannote.audio](https://github.com/pyannote/pyannote-audio) Github repository. ## Usage Relies on pyannote.audio 2.0 currently in development: see [installation instructions](https://github.com/pyannote/pyannote-audio/tree/develop#installation). ### Voice activity detection ```python from pyannote.audio.pipelines import VoiceActivityDetection pipeline = VoiceActivityDetection(segmentation="pyannote/segmentation") HYPER_PARAMETERS = { # onset/offset activation thresholds "onset": 0.5, "offset": 0.5, # remove speech regions shorter than that many seconds. "min_duration_on": 0.0, # fill non-speech regions shorter than that many seconds. "min_duration_off": 0.0 } pipeline.instantiate(HYPER_PARAMETERS) vad = pipeline("audio.wav") # `vad` is a pyannote.core.Annotation instance containing speech regions ``` ### Overlapped speech detection ```python from pyannote.audio.pipelines import OverlappedSpeechDetection pipeline = OverlappedSpeechDetection(segmentation="pyannote/segmentation") pipeline.instantiate(HYPER_PARAMETERS) osd = pipeline("audio.wav") # `osd` is a pyannote.core.Annotation instance containing overlapped speech regions ``` ### Resegmentation ```python from pyannote.audio.pipelines import Resegmentation pipeline = Resegmentation(segmentation="pyannote/segmentation", diarization="baseline") pipeline.instantiate(HYPER_PARAMETERS) resegmented_baseline = pipeline({"audio": "audio.wav", "baseline": baseline}) # where `baseline` should be provided as a pyannote.core.Annotation instance ``` ### Raw scores ```python from pyannote.audio import Inference inference = Inference("pyannote/segmentation") segmentation = inference("audio.wav") # `segmentation` is a pyannote.core.SlidingWindowFeature # instance containing raw segmentation scores like the # one pictured above (output) ``` ## Reproducible research In order to reproduce the results of the paper ["End-to-end speaker segmentation for overlap-aware resegmentation "](https://arxiv.org/abs/2104.04045), use `pyannote/segmentation@Interspeech2021` with the following hyper-parameters: | Voice activity detection | `onset` | `offset` | `min_duration_on` | `min_duration_off` | | ------------------------ | ------- | -------- | ----------------- | ------------------ | | AMI Mix-Headset | 0.684 | 0.577 | 0.181 | 0.037 | | DIHARD3 | 0.767 | 0.377 | 0.136 | 0.067 | | VoxConverse | 0.767 | 0.713 | 0.182 | 0.501 | | Overlapped speech detection | `onset` | `offset` | `min_duration_on` | `min_duration_off` | | --------------------------- | ------- | -------- | ----------------- | ------------------ | | AMI Mix-Headset | 0.448 | 0.362 | 0.116 | 0.187 | | DIHARD3 | 0.430 | 0.320 | 0.091 | 0.144 | | VoxConverse | 0.587 | 
0.426 | 0.337 | 0.112 | | Resegmentation of VBx | `onset` | `offset` | `min_duration_on` | `min_duration_off` | | --------------------- | ------- | -------- | ----------------- | ------------------ | | AMI Mix-Headset | 0.542 | 0.527 | 0.044 | 0.705 | | DIHARD3 | 0.592 | 0.489 | 0.163 | 0.182 | | VoxConverse | 0.537 | 0.724 | 0.410 | 0.563 | Expected outputs (and VBx baseline) are also provided in the `/reproducible_research` sub-directories. ## Citation ```bibtex @inproceedings{Bredin2021, Title = {{End-to-end speaker segmentation for overlap-aware resegmentation}}, Author = {{Bredin}, Herv{\'e} and {Laurent}, Antoine}, Booktitle = {Proc. Interspeech 2021}, Address = {Brno, Czech Republic}, Month = {August}, Year = {2021}, } ``` ```bibtex @inproceedings{Bredin2020, Title = {{pyannote.audio: neural building blocks for speaker diarization}}, Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe}, Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing}, Address = {Barcelona, Spain}, Month = {May}, Year = {2020}, } ```
aorhan/ddpm-butterflies-128
aorhan
2022-11-08T17:09:51Z
2
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:imagefolder", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-08T16:38:49Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: imagefolder metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `imagefolder` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/aorhan/ddpm-butterflies-128/tensorboard?#scalars)
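The card's "How to use" block is still a TODO; the `google/ddpm-*` cards later in this section show the `DDPMPipeline` API, and the same pattern should apply to this checkpoint. This is a sketch following that pattern, not an author-verified snippet.

```python
# Sketch based on the DDPMPipeline usage shown in the google/ddpm
# cards below; not verified by this card's author.
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("aorhan/ddpm-butterflies-128")
image = pipe().images[0]  # sample random noise and denoise
image.save("ddpm_butterfly.png")
```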
PaulaAlfy/xlm-roberta-base-finetuned-panx-all
PaulaAlfy
2022-11-08T16:56:34Z
108
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-08T16:22:18Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1528 - F1: 0.8734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2634 | 1.0 | 525 | 0.1602 | 0.8258 | | 0.1316 | 2.0 | 1050 | 0.1454 | 0.8471 | | 0.089 | 3.0 | 1575 | 0.1430 | 0.8555 | | 0.0596 | 4.0 | 2100 | 0.1430 | 0.8676 | | 0.0393 | 5.0 | 2625 | 0.1528 | 0.8734 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
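Usage is unfilled here as well; a minimal token-classification (NER) sketch, assuming standard `pipeline` usage. The input sentence is illustrative, and the PAN-X entity labels are not documented in the card.

```python
# Minimal NER sketch for the PAN-X fine-tuned XLM-R checkpoint.
# Assumes standard pipeline usage; the entity label names
# (e.g. PER/ORG/LOC) are not documented in the card above.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="PaulaAlfy/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte Paris."))
```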
lgris/whisper-tiny-cv11-pt
lgris
2022-11-08T16:50:26Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "pt", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-06T23:14:19Z
--- language: - pt license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Tiny PT with Common Voice 11 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 args: 'config: pt, split: test' metrics: - name: Wer type: wer value: 33.24473522796974 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny PT with Common Voice 11 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5205 - Wer: 33.2447 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 16000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.3154 | 0.44 | 1000 | 0.4987 | 36.2196 | | 0.3252 | 0.88 | 2000 | 0.4586 | 33.6213 | | 0.1989 | 1.32 | 3000 | 0.4457 | 32.7455 | | 0.3112 | 1.76 | 4000 | 0.4356 | 31.4097 | | 0.1329 | 2.2 | 5000 | 0.4348 | 31.1559 | | 0.1193 | 2.64 | 6000 | 0.4343 | 31.4046 | | 0.0723 | 3.07 | 7000 | 0.4424 | 31.5869 | | 0.0698 | 3.51 | 8000 | 0.4497 | 32.0827 | | 0.0865 | 3.95 | 9000 | 0.4497 | 31.0945 | | 0.0522 | 4.39 | 10000 | 0.4716 | 32.2190 | | 0.0542 | 4.83 | 11000 | 0.4761 | 32.6944 | | 0.061 | 5.27 | 12000 | 0.4983 | 32.0691 | | 0.0459 | 5.71 | 13000 | 0.4985 | 32.4968 | | 0.0338 | 6.15 | 14000 | 0.5123 | 33.3129 | | 0.0492 | 6.59 | 15000 | 0.5217 | 33.2686 | | 0.0194 | 7.03 | 16000 | 0.5205 | 33.2447 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.6.1 - Tokenizers 0.13.1
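The card reports WER but shows no usage; a minimal Portuguese ASR sketch follows, assuming the standard `pipeline` API ("audio.mp3" is a placeholder path).

```python
# Minimal ASR sketch for the Portuguese fine-tuned Whisper checkpoint.
# "audio.mp3" is a placeholder path to a speech recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lgris/whisper-tiny-cv11-pt",
)
print(asr("audio.mp3")["text"])
```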
harmonai/glitch-440k
harmonai
2022-11-08T16:43:13Z
7
1
diffusers
[ "diffusers", "audio-generation", "license:mit", "diffusers:DanceDiffusionPipeline", "region:us" ]
null
2022-10-20T12:19:52Z
--- license: mit tags: - audio-generation --- [Dance Diffusion](https://github.com/Harmonai-org/sample-generator) is now available in 🧨 Diffusers. ## FP32 ```python # !pip install diffusers[torch] accelerate scipy from diffusers import DiffusionPipeline from scipy.io.wavfile import write model_id = "harmonai/glitch-440k" pipe = DiffusionPipeline.from_pretrained(model_id) pipe = pipe.to("cuda") audios = pipe(audio_length_in_s=4.0).audios # To save locally for i, audio in enumerate(audios): write(f"test_{i}.wav", pipe.unet.sample_rate, audio.transpose()) # To display in Google Colab import IPython.display as ipd for audio in audios: display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) ``` ## FP16 Faster, at a small loss of quality ```python # !pip install diffusers[torch] accelerate scipy from diffusers import DiffusionPipeline from scipy.io.wavfile import write import torch model_id = "harmonai/glitch-440k" pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") audios = pipe(audio_length_in_s=4.0).audios # To save locally for i, audio in enumerate(audios): write(f"{i}.wav", pipe.unet.sample_rate, audio.transpose()) # To display in Google Colab import IPython.display as ipd for audio in audios: display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) ```
harmonai/jmann-small-190k
harmonai
2022-11-08T16:42:12Z
8
3
diffusers
[ "diffusers", "audio-generation", "license:mit", "diffusers:DanceDiffusionPipeline", "region:us" ]
null
2022-10-20T12:20:32Z
--- license: mit tags: - audio-generation --- [Dance Diffusion](https://github.com/Harmonai-org/sample-generator) is now available in 🧨 Diffusers. ## FP32 ```python # !pip install diffusers[torch] accelerate scipy from diffusers import DiffusionPipeline from scipy.io.wavfile import write model_id = "harmonai/jmann-small-190k" pipe = DiffusionPipeline.from_pretrained(model_id) pipe = pipe.to("cuda") audios = pipe(audio_length_in_s=4.0).audios # To save locally for i, audio in enumerate(audios): write(f"test_{i}.wav", pipe.unet.sample_rate, audio.transpose()) # To display in Google Colab import IPython.display as ipd for audio in audios: display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) ``` ## FP16 Faster, at a small loss of quality ```python # !pip install diffusers[torch] accelerate scipy from diffusers import DiffusionPipeline from scipy.io.wavfile import write import torch model_id = "harmonai/jmann-small-190k" pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") audios = pipe(audio_length_in_s=4.0).audios # To save locally for i, audio in enumerate(audios): write(f"{i}.wav", pipe.unet.sample_rate, audio.transpose()) # To display in Google Colab import IPython.display as ipd for audio in audios: display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) ```
bigmorning/whisper_end22
bigmorning
2022-11-08T15:15:27Z
62
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-08T15:15:16Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: whisper_end22 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_end22 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1061 - Train Accuracy: 0.0341 - Validation Loss: 0.5635 - Validation Accuracy: 0.0314 - Epoch: 22 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 5.0856 | 0.0116 | 4.4440 | 0.0123 | 0 | | 4.3149 | 0.0131 | 4.0521 | 0.0142 | 1 | | 3.9260 | 0.0146 | 3.7264 | 0.0153 | 2 | | 3.5418 | 0.0160 | 3.3026 | 0.0174 | 3 | | 2.7510 | 0.0198 | 2.0157 | 0.0241 | 4 | | 1.6782 | 0.0250 | 1.3567 | 0.0273 | 5 | | 1.1705 | 0.0274 | 1.0678 | 0.0286 | 6 | | 0.9126 | 0.0287 | 0.9152 | 0.0294 | 7 | | 0.7514 | 0.0296 | 0.8057 | 0.0299 | 8 | | 0.6371 | 0.0302 | 0.7409 | 0.0302 | 9 | | 0.5498 | 0.0307 | 0.6854 | 0.0306 | 10 | | 0.4804 | 0.0312 | 0.6518 | 0.0307 | 11 | | 0.4214 | 0.0316 | 0.6200 | 0.0310 | 12 | | 0.3713 | 0.0319 | 0.5947 | 0.0311 | 13 | | 0.3281 | 0.0322 | 0.5841 | 0.0311 | 14 | | 0.2891 | 0.0325 | 0.5700 | 0.0313 | 15 | | 0.2550 | 0.0328 | 0.5614 | 0.0313 | 16 | | 0.2237 | 0.0331 | 0.5572 | 0.0313 | 17 | | 0.1959 | 0.0333 | 0.5563 | 0.0314 | 18 | | 0.1698 | 0.0335 | 0.5530 | 0.0314 | 19 | | 0.1455 | 0.0337 | 0.5590 | 0.0314 | 20 | | 0.1242 | 0.0339 | 0.5743 | 0.0313 | 21 | | 0.1061 | 0.0341 | 0.5635 | 0.0314 | 22 | ### Framework versions - Transformers 4.25.0.dev0 - TensorFlow 2.9.2 - Tokenizers 0.13.2
bigmorning/whisper_0015
bigmorning
2022-11-08T14:43:13Z
32
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-08T14:42:41Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: whisper_0015 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_0015 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3281 - Train Accuracy: 0.0322 - Validation Loss: 0.5841 - Validation Accuracy: 0.0311 - Epoch: 14 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 5.0856 | 0.0116 | 4.4440 | 0.0123 | 0 | | 4.3149 | 0.0131 | 4.0521 | 0.0142 | 1 | | 3.9260 | 0.0146 | 3.7264 | 0.0153 | 2 | | 3.5418 | 0.0160 | 3.3026 | 0.0174 | 3 | | 2.7510 | 0.0198 | 2.0157 | 0.0241 | 4 | | 1.6782 | 0.0250 | 1.3567 | 0.0273 | 5 | | 1.1705 | 0.0274 | 1.0678 | 0.0286 | 6 | | 0.9126 | 0.0287 | 0.9152 | 0.0294 | 7 | | 0.7514 | 0.0296 | 0.8057 | 0.0299 | 8 | | 0.6371 | 0.0302 | 0.7409 | 0.0302 | 9 | | 0.5498 | 0.0307 | 0.6854 | 0.0306 | 10 | | 0.4804 | 0.0312 | 0.6518 | 0.0307 | 11 | | 0.4214 | 0.0316 | 0.6200 | 0.0310 | 12 | | 0.3713 | 0.0319 | 0.5947 | 0.0311 | 13 | | 0.3281 | 0.0322 | 0.5841 | 0.0311 | 14 | ### Framework versions - Transformers 4.25.0.dev0 - TensorFlow 2.9.2 - Tokenizers 0.13.2
rosamondthalken/t5-base-sci-names
rosamondthalken
2022-11-08T14:39:36Z
8
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "scientific names", "text generation", "en", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-09-16T15:00:05Z
--- language: - en tags: - scientific names - text generation license: cc-by-sa-4.0 --- # t5-base-sci-names Biodiversity literature is dedicated to the identification, documentation, and categorization of plants, fungi, animals, and other living organisms. Correctly extracting the name of an organism within these documents involves finding the entire scientific name–including the genus, specific epithet, and author name. Extracting these names allows biologists to access documents about a species more comprehensively, and to track an organism’s history of documentation, which includes biological changes and changes in how scientists describe them. **t5-base-sci-names** uses advances in text-to-text generation to generate scientific names and authors from biodiversity literature. This model was trained on hand-labeled biodiversity texts, including labeled information about a mentioned organism's genus (abbreviated and expanded), specific epithet, and author. This model was trained to output 0-N scientific names with specific prefixes (e.g. "genus = " or "epithet = ") and performs best with anywhere from 20-120 words. You can also use the model in this tutorial for [scientific names generation](https://colab.research.google.com/drive/1GEpnCaMJYiPIhuZiDJ1X1pZsGtGSm8Ds?usp=sharing). Thanks to Damon Little and Nelson Salinas at the New York Botanical Gardens for their support. *Note that this model is still a work in progress. Any feedback is welcome.*
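The card links a tutorial notebook but includes no inline snippet; here is a minimal sketch of the prefixed-output generation it describes, assuming standard text2text-generation `pipeline` usage. The passage is illustrative, and the output format ("genus = ...", "epithet = ...") is paraphrased from the card.

```python
# Minimal sketch of the "genus = ... / epithet = ..." extraction the
# card describes. The input passage is illustrative, not from the card.
from transformers import pipeline

gen = pipeline(
    "text2text-generation",
    model="rosamondthalken/t5-base-sci-names",
)
passage = (
    "Quercus alba L., the white oak, is widespread in eastern North "
    "America and was first described by Linnaeus in 1753."
)
print(gen(passage, max_length=64)[0]["generated_text"])
```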
ashish23993/t5-small-finetuned-xsum-ashish-5000
ashish23993
2022-11-08T13:53:48Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-08T10:37:19Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-finetuned-xsum-ashish-5000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-ashish-5000 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6200 - Rouge1: 14.8258 - Rouge2: 4.7741 - Rougel: 11.3583 - Rougelsum: 13.2147 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 313 | 2.8460 | 13.7208 | 4.2759 | 10.4447 | 12.1604 | 19.0 | | 2.8939 | 2.0 | 626 | 2.7686 | 14.0884 | 4.4571 | 10.8946 | 12.6399 | 19.0 | | 2.8939 | 3.0 | 939 | 2.7323 | 14.249 | 4.4839 | 10.9701 | 12.7336 | 19.0 | | 2.6857 | 4.0 | 1252 | 2.7140 | 14.4123 | 4.5447 | 11.09 | 12.8468 | 19.0 | | 2.6353 | 5.0 | 1565 | 2.6962 | 14.4931 | 4.6524 | 11.1552 | 12.9235 | 19.0 | | 2.6353 | 6.0 | 1878 | 2.6827 | 14.6765 | 4.6571 | 11.2099 | 13.0457 | 19.0 | | 2.6005 | 7.0 | 2191 | 2.6743 | 14.6923 | 4.6506 | 11.1972 | 13.0305 | 19.0 | | 2.5721 | 8.0 | 2504 | 2.6691 | 14.8242 | 4.7211 | 11.2794 | 13.1706 | 19.0 | | 2.5721 | 9.0 | 2817 | 2.6598 | 14.9018 | 4.7961 | 11.3472 | 13.2632 | 19.0 | | 2.5526 | 10.0 | 3130 | 2.6559 | 14.8855 | 4.8159 | 11.3402 | 13.2578 | 19.0 | | 2.5526 | 11.0 | 3443 | 2.6533 | 14.8022 | 4.7367 | 11.2253 | 13.1308 | 19.0 | | 2.5352 | 12.0 | 3756 | 2.6490 | 14.7306 | 4.6719 | 11.158 | 13.1083 | 19.0 | | 2.5238 | 13.0 | 4069 | 2.6460 | 14.7908 | 4.6958 | 11.2061 | 13.1103 | 19.0 | | 2.5238 | 14.0 | 4382 | 2.6436 | 14.7332 | 4.7132 | 11.1581 | 13.0709 | 19.0 | | 2.5067 | 15.0 | 4695 | 2.6403 | 14.7062 | 4.7363 | 11.1275 | 13.0921 | 19.0 | | 2.4922 | 16.0 | 5008 | 2.6382 | 14.735 | 4.6939 | 11.1301 | 13.0941 | 19.0 | | 2.4922 | 17.0 | 5321 | 2.6353 | 14.8166 | 4.7615 | 11.2635 | 13.1526 | 19.0 | | 2.4841 | 18.0 | 5634 | 2.6334 | 14.8517 | 4.8063 | 11.2705 | 13.1878 | 19.0 | | 2.4841 | 19.0 | 5947 | 2.6306 | 14.7038 | 4.6747 | 11.1493 | 13.0818 | 19.0 | | 2.4789 | 20.0 | 6260 | 2.6312 | 14.8127 | 4.7543 | 11.2775 | 13.1812 | 19.0 | | 2.4644 | 21.0 | 6573 | 2.6285 | 14.7922 | 4.7114 | 11.2655 | 13.1716 | 19.0 | | 2.4644 | 22.0 | 6886 | 2.6270 | 14.8587 | 4.78 | 11.3163 | 13.2017 | 19.0 | | 2.4506 | 23.0 | 7199 | 2.6264 | 14.7304 | 4.6852 | 11.2138 | 13.1306 | 19.0 | | 2.4595 | 24.0 | 7512 | 2.6258 | 14.7294 | 4.6597 | 11.2354 | 13.1126 | 19.0 | | 2.4595 | 25.0 | 7825 | 2.6257 | 14.6318 | 4.6467 | 11.1913 | 13.0587 | 19.0 | | 2.4523 | 26.0 | 8138 | 2.6250 | 14.7609 | 4.7037 | 11.2777 | 13.1711 | 19.0 | | 2.4523 | 27.0 | 8451 | 2.6231 | 14.7342 | 4.7566 | 11.2569 | 13.1351 | 19.0 | | 2.4317 | 28.0 | 8764 | 2.6223 | 14.725 | 4.7248 | 11.247 | 
13.1234 | 19.0 | | 2.4374 | 29.0 | 9077 | 2.6231 | 14.6911 | 4.7196 | 11.2372 | 13.0854 | 19.0 | | 2.4374 | 30.0 | 9390 | 2.6234 | 14.6889 | 4.7202 | 11.2565 | 13.1003 | 19.0 | | 2.4323 | 31.0 | 9703 | 2.6222 | 14.7264 | 4.7543 | 11.2752 | 13.1442 | 19.0 | | 2.4295 | 32.0 | 10016 | 2.6215 | 14.7613 | 4.723 | 11.2632 | 13.1389 | 19.0 | | 2.4295 | 33.0 | 10329 | 2.6212 | 14.7716 | 4.7676 | 11.3014 | 13.1637 | 19.0 | | 2.4282 | 34.0 | 10642 | 2.6211 | 14.7547 | 4.7437 | 11.296 | 13.1552 | 19.0 | | 2.4282 | 35.0 | 10955 | 2.6203 | 14.7717 | 4.7502 | 11.2999 | 13.1498 | 19.0 | | 2.4265 | 36.0 | 11268 | 2.6208 | 14.7952 | 4.7795 | 11.3294 | 13.1866 | 19.0 | | 2.4145 | 37.0 | 11581 | 2.6203 | 14.8122 | 4.7814 | 11.3385 | 13.1882 | 19.0 | | 2.4145 | 38.0 | 11894 | 2.6202 | 14.8281 | 4.7798 | 11.3381 | 13.2065 | 19.0 | | 2.4241 | 39.0 | 12207 | 2.6202 | 14.8163 | 4.7801 | 11.3492 | 13.2034 | 19.0 | | 2.4163 | 40.0 | 12520 | 2.6200 | 14.8258 | 4.7741 | 11.3583 | 13.2147 | 19.0 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
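Usage is unfilled for this summarizer as well; a minimal sketch assuming standard `pipeline` usage. Note the card's evaluation Gen Len of 19, so expect very short outputs.

```python
# Minimal summarization sketch for the fine-tuned t5-small checkpoint.
# The card's evaluation generation length is 19 tokens, so summaries
# will be terse; the input article is a placeholder.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ashish23993/t5-small-finetuned-xsum-ashish-5000",
)
article = "Your long input document goes here ..."
print(summarizer(article, max_length=32, min_length=8)[0]["summary_text"])
```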
google/ddpm-church-256
google
2022-11-08T13:41:58Z
850
9
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "arxiv:2006.11239", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2022-07-19T10:42:51Z
--- license: apache-2.0 tags: - pytorch - diffusers - unconditional-image-generation --- # Denoising Diffusion Probabilistic Models (DDPM) **Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) **Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel **Abstract**: *We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.* ## Inference **DDPM** models can use *discrete noise schedulers* such as: - [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py) - [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py) - [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py) for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest. For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead. See the following code: ```python # !pip install diffusers from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline model_id = "google/ddpm-church-256" # load model and scheduler ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference # run pipeline in inference (sample random noise and denoise) image = ddpm().images[0] # save image image.save("ddpm_generated_image.png") ``` For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) ## Training If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) ## Samples 1. ![sample_1](https://huggingface.co/google/ddpm-church-256/resolve/main/images/generated_image_0.png) 2. ![sample_2](https://huggingface.co/google/ddpm-church-256/resolve/main/images/generated_image_1.png) 3. ![sample_3](https://huggingface.co/google/ddpm-church-256/resolve/main/images/generated_image_2.png) 4. ![sample_4](https://huggingface.co/google/ddpm-church-256/resolve/main/images/generated_image_3.png)
google/ddpm-bedroom-256
google
2022-11-08T13:41:35Z
626
4
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "arxiv:2006.11239", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2022-07-19T10:43:04Z
--- license: apache-2.0 tags: - pytorch - diffusers - unconditional-image-generation --- # Denoising Diffusion Probabilistic Models (DDPM) **Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) **Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel **Abstract**: *We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.* ## Inference **DDPM** models can use *discrete noise schedulers* such as: - [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py) - [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py) - [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py) for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest. For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead. See the following code: ```python # !pip install diffusers from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline model_id = "google/ddpm-bedroom-256" # load model and scheduler ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference # run pipeline in inference (sample random noise and denoise) image = ddpm().images[0] # save image image.save("ddpm_generated_image.png") ``` For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) ## Training If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) ## Samples 1. ![sample_1](https://huggingface.co/google/ddpm-bedroom-256/resolve/main/images/generated_image_0.png) 2. ![sample_2](https://huggingface.co/google/ddpm-bedroom-256/resolve/main/images/generated_image_1.png) 3. ![sample_3](https://huggingface.co/google/ddpm-bedroom-256/resolve/main/images/generated_image_2.png) 4. ![sample_4](https://huggingface.co/google/ddpm-bedroom-256/resolve/main/images/generated_image_3.png)
lewtun/my-awesome-setfit-model-98
lewtun
2022-11-08T13:23:40Z
4
1
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "setfit", "transformers", "text-classification", "region:us" ]
text-classification
2022-10-18T08:50:35Z
--- pipeline_tag: text-classification tags: - sentence-transformers - setfit - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 40, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Bingsu/clip-vit-base-patch32-ko
Bingsu
2022-11-08T11:02:10Z
2,198
5
transformers
[ "transformers", "pytorch", "tf", "safetensors", "clip", "zero-shot-image-classification", "ko", "arxiv:2004.09813", "doi:10.57967/hf/1615", "license:mit", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2022-09-16T05:18:05Z
---
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
  candidate_labels: 기타치는 고양이, 피아노 치는 강아지
  example_title: Guitar, cat and dog
language: ko
license: mit
---

# clip-vit-base-patch32-ko

Korean CLIP model trained with the method of [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813).

Training code: <https://github.com/Bing-su/KoCLIP_training_code>

Training data: all Korean-English parallel data available on AIHUB.

## How to Use

#### 1.

```python
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

repo = "Bingsu/clip-vit-base-patch32-ko"
model = AutoModel.from_pretrained(repo)
processor = AutoProcessor.from_pretrained(repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["고양이 두 마리", "개 두 마리"], images=image, return_tensors="pt", padding=True)
with torch.inference_mode():
    outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
```

```python
>>> probs
tensor([[0.9926, 0.0074]])
```

#### 2.

```python
from transformers import pipeline

repo = "Bingsu/clip-vit-base-patch32-ko"
pipe = pipeline("zero-shot-image-classification", model=repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
result = pipe(images=url, candidate_labels=["고양이 한 마리", "고양이 두 마리", "분홍색 소파에 드러누운 고양이 친구들"], hypothesis_template="{}")
```

```python
>>> result
[{'score': 0.9456236958503723, 'label': '분홍색 소파에 드러누운 고양이 친구들'},
 {'score': 0.05315302312374115, 'label': '고양이 두 마리'},
 {'score': 0.0012233294546604156, 'label': '고양이 한 마리'}]
```

## Tokenizer

The tokenizer was trained from the original CLIP tokenizer via `.train_new_from_iterator`, on a mixture of Korean and English data in a 7:3 ratio.

https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/clip/modeling_clip.py#L661-L666

```python
# text_embeds.shape = [batch_size, sequence_length, transformer.width]
# take features from the eot embedding (eot_token is the highest number in each sequence)
# casting to torch.int for onnx compatibility: argmax doesn't support int64 inputs with opset 14
pooled_output = last_hidden_state[
    torch.arange(last_hidden_state.shape[0]), input_ids.to(torch.int).argmax(dim=-1)
]
```

Because the CLIP model takes the token with the largest id when computing the `pooled_output`, the eos token must be the last token in the vocabulary.
bthomas/setfit_bench_bert-base-uncased_finetuned_for_seqclassif
bthomas
2022-11-08T10:23:54Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "SetFitbench", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-08T10:20:10Z
--- license: apache-2.0 tags: - SetFitbench - generated_from_trainer model-index: - name: setfit_bench_bert-base-uncased_finetuned_for_seqclassif results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # setfit_bench_bert-base-uncased_finetuned_for_seqclassif This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2666 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.3437 | 1.0 | 189 | 0.2666 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.12.1
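## How to use

A minimal, untested sketch of querying the resulting classifier, assuming the standard `text-classification` pipeline applies to this checkpoint; the example sentence is invented and the label names depend on the (undocumented) fine-tuning data.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bthomas/setfit_bench_bert-base-uncased_finetuned_for_seqclassif",
)
print(classifier("This is an example sentence to classify."))
# -> [{'label': 'LABEL_0', 'score': ...}]  (labels depend on the fine-tuning data)
```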
troesy/hateBERT_3epoch
troesy
2022-11-08T10:21:20Z
16
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-08T10:07:36Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: hateBERT_3epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hateBERT_3epoch This model is a fine-tuned version of [GroNLP/hateBERT](https://huggingface.co/GroNLP/hateBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2174 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.9174 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 174 | 0.2301 | 0.0 | 0.0 | 0.0 | 0.9112 | | No log | 2.0 | 348 | 0.2192 | 0.0 | 0.0 | 0.0 | 0.9148 | | 0.2311 | 3.0 | 522 | 0.2174 | 0.0 | 0.0 | 0.0 | 0.9174 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
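## How to use

A minimal, untested sketch of querying the tagger; the example sentence is invented, and — given the zero precision/recall above — the model may emit few or no entity spans.

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="troesy/hateBERT_3epoch",
    aggregation_strategy="simple",  # merge sub-word pieces into spans
)
print(tagger("This is an example sentence to tag."))
```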
nlplab130/distilbert-base-uncased-finetuned-squad
nlplab130
2022-11-08T09:27:46Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-08T06:39:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1455 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2056 | 1.0 | 5533 | 1.1415 | | 0.949 | 2.0 | 11066 | 1.1144 | | 0.7471 | 3.0 | 16599 | 1.1455 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
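## How to use

A minimal sketch using the standard `question-answering` pipeline; the question/context pair is invented.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="nlplab130/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset for three epochs.",
)
print(result["answer"], result["score"])
```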
tkubotake/xlm-roberta-base-finetuned-panx-all
tkubotake
2022-11-08T09:07:09Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-07T03:46:39Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [tkubotake/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/tkubotake/xlm-roberta-base-finetuned-panx-de) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2290 - F1: 0.8629 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1259 | 1.0 | 835 | 0.1879 | 0.8478 | | 0.078 | 2.0 | 1670 | 0.2121 | 0.8582 | | 0.0439 | 3.0 | 2505 | 0.2290 | 0.8629 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
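## How to use

A minimal, untested sketch of tagging named entities with this multilingual checkpoint; the German example sentence is invented.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tkubotake/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))  # invented example
```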
Enthusiastic/Stars
Enthusiastic
2022-11-08T08:44:02Z
28
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-08T08:43:47Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: Stars results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.5135135054588318 --- # Stars Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Andromeda ![Andromeda ](images/Andromeda_.jpg) #### Cassiopeia ![Cassiopeia ](images/Cassiopeia_.jpg) #### Hercules ![Hercules](images/Hercules.jpg) #### Orion ![Orion](images/Orion.jpg) #### Perseus ![Perseus](images/Perseus.jpg)
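## How to use

A minimal, untested sketch using the standard `image-classification` pipeline; the image path is a placeholder — point it at any photo of a constellation.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Enthusiastic/Stars")
print(classifier("path/to/constellation.jpg"))  # placeholder path
# -> e.g. [{'label': 'Orion', 'score': ...}, ...]
```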
tkubotake/xlm-roberta-base-finetuned-panx-de-fr
tkubotake
2022-11-08T08:23:00Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-07T01:53:41Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [tkubotake/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/tkubotake/xlm-roberta-base-finetuned-panx-de) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1829 - F1: 0.8671 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.158 | 1.0 | 715 | 0.1689 | 0.8471 | | 0.099 | 2.0 | 1430 | 0.1781 | 0.8576 | | 0.0599 | 3.0 | 2145 | 0.1829 | 0.8671 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
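## How to use

A minimal, untested sketch of tagging a sentence with the raw model instead of the pipeline; the French example sentence is invented.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo = "tkubotake/xlm-roberta-base-finetuned-panx-de-fr"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

text = "Jacques Chirac est né à Paris."  # invented example
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, logits.argmax(dim=-1)[0]):
    print(token, model.config.id2label[pred.item()])
```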
bguan/Reinforce-Pixelcopter-PLE-v0
bguan
2022-11-08T07:20:11Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-11-08T07:20:03Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 16.00 +/- 11.92
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
GuiGel/beto-uncased-flert-context-we-lstm-crf-meddocan
GuiGel
2022-11-08T07:19:25Z
6
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "region:us" ]
token-classification
2022-11-08T07:16:36Z
--- tags: - flair - token-classification - sequence-tagger-model --- ### Demo: How to use in Flair Requires: - **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("GuiGel/beto-uncased-flert-context-we-lstm-crf-meddocan") # make example sentence sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ```
sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset
sayakpaul
2022-11-08T06:26:47Z
24
3
transformers
[ "transformers", "pytorch", "tensorboard", "videomae", "video-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2022-11-07T08:54:43Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-kinetics-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-kinetics-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0666 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 111 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9274 | 0.34 | 38 | 0.3271 | 0.9786 | | 0.0887 | 1.34 | 76 | 0.0668 | 1.0 | | 0.0267 | 2.32 | 111 | 0.0471 | 1.0 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
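## How to use

A minimal sketch of running inference with this checkpoint, following the generic VideoMAE usage pattern; the random tensor stands in for a real clip of 16 sampled frames.

```python
import numpy as np
import torch
from transformers import VideoMAEFeatureExtractor, VideoMAEForVideoClassification

repo = "sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset"
feature_extractor = VideoMAEFeatureExtractor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# A real input would be 16 frames sampled from a video; random data keeps the sketch self-contained.
video = list(np.random.randn(16, 3, 224, 224))
inputs = feature_extractor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```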
bguan/Reinforce-CartPole-v1
bguan
2022-11-08T05:55:49Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-11-08T02:43:41Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 440.00 +/- 88.54
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
xu1998hz/sescore_english_mt
xu1998hz
2022-11-08T05:16:19Z
0
1
null
[ "region:us" ]
null
2022-11-05T01:44:33Z
SEScore English checkpoint for machine translation evaluation.
kit-nlp/yacis-electra-small-japanese-irony
kit-nlp
2022-11-08T04:16:30Z
5
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T07:05:34Z
---
language: ja
license: cc-by-sa-4.0
---

# YACIS ELECTRA Small Japanese for Irony

This is an [ELECTRA](https://github.com/google-research/electra) Small model for the Japanese language, fine-tuned for automatic irony detection. The model is based on [YACIS ELECTRA small Japanese](https://huggingface.co/ptaszynski/yacis-electra-small-japanese) and was fine-tuned on a dataset containing ironic and sarcastic tweets.

## Licenses

The fine-tuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License.

<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>

## Citations

Please cite this model using the following citation.

```
@inproceedings{dan2022yaciselectra-small-irony,
  title={北見工業大学 テキスト情報処理研究室 ELECTRA Base 皮肉検出モデル (Izumi Labs ver.)},
  author={団 俊輔 and プタシンスキ ミハウ and ジェプカ ラファウ and 桝井 文人},
  publisher={HuggingFace},
  year={2022},
  url = "https://huggingface.co/kit-nlp/yacis-electra-small-japanese-irony"
}
```
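## How to use

An untested sketch using the `text-classification` pipeline, assuming the bundled tokenizer loads through `AutoTokenizer`; the Japanese example sentence is invented, and label names depend on the fine-tuning setup.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kit-nlp/yacis-electra-small-japanese-irony",
)
print(classifier("また雨か。最高の天気だね。"))  # invented (ironic) example
```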
bigmorning/bigmorning_whisper
bigmorning
2022-11-08T03:44:46Z
61
0
transformers
[ "transformers", "tf", "whisper", "automatic-speech-recognition", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-08T03:13:01Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: bigmorning_whisper results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bigmorning_whisper This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.25.0.dev0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
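## How to use (illustrative)

An untested sketch of transcribing audio with this TF checkpoint. The class names follow the TF Whisper API in transformers; whether this repo ships processor files is an assumption — if not, load the processor from `openai/whisper-tiny` instead.

```python
import numpy as np
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

repo = "bigmorning/bigmorning_whisper"
processor = WhisperProcessor.from_pretrained(repo)  # assumption: processor files are present
model = TFWhisperForConditionalGeneration.from_pretrained(repo)

# One second of 16 kHz silence as a placeholder for real audio.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
pred_ids = model.generate(inputs.input_features)
print(processor.batch_decode(pred_ids, skip_special_tokens=True))
```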
dhshin/ddpm-butterflies-128
dhshin
2022-11-08T03:16:58Z
3
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-10-25T01:06:18Z
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-butterflies-128

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset.

## Intended uses & limitations

#### How to use

```python
# Untested sketch: load this checkpoint with the standard DDPM pipeline and sample one image.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("dhshin/ddpm-butterflies-128")
image = pipeline().images[0]  # unconditional sampling
image.save("butterfly.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16

### Training results

📈 [TensorBoard logs](https://huggingface.co/dhshin/ddpm-butterflies-128/tensorboard?#scalars)
BigSalmon/InformalToFormalLincoln90Paraphrase
BigSalmon
2022-11-08T03:06:10Z
163
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-08T02:37:14Z
data: https://github.com/BigSalmon2/InformalToFormalDataset

Text Generation

Informal Formal

```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln90Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln90Paraphrase")
```

```
Demo: https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```

```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
                         max_length=10 + len(prompt),
                         temperature=1.0,
                         top_k=50,
                         top_p=0.95,
                         do_sample=True,
                         num_return_sequences=5,
                         early_stopping=True)
for i in range(5):
    print(tokenizer.decode(outputs[i]))
```

Most likely outputs (Disclaimer: I highly recommend using this over just generating):

```
import torch  # added: torch was used but not imported

prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
device = "cuda" if torch.cuda.is_available() else "cpu"  # added: device was undefined
model = model.to(device)
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput.to(device)
logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
print(best_words)
```

```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.

informal english:
```

```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```

```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - nebraska - unicamerical legislature - different from federal house and senate text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate. 
*** - penny has practically no value - should be taken out of circulation - just as other coins have been in us history - lost use - value not enough - to make environmental consequences worthy text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. ``` ngos are characterized by: □ voluntary citizens' group that is organized on a local, national or international level □ encourage political participation □ often serve humanitarian functions □ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` ``` original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung. adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung. *** original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark. adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark. *** original: ``` ``` original: had trouble deciding. translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation. *** original: ``` ``` input: not loyal 1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ). *** input: ``` ``` first: ( was complicit in / was involved in ). antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ). *** first: ( have no qualms about / see no issue with ). antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ). *** first: ( do not see eye to eye / disagree often ). 
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ). *** first: ``` ``` stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground. *** languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo. *** dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia. *** embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons. ``` Infill / Infilling / Masking / Phrase Masking (Works pretty decently actually, especially when you use logprobs code from above): ``` his contention [blank] by the evidence [sep] was refuted [answer] *** few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer] *** when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer] *** the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer] *** the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer] *** microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer] *** ``` ``` original: microsoft word's [MASK] pricing invites competition. Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition. *** original: the library’s quiet atmosphere encourages visitors to [blank] in their work. Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work. ``` Backwards ``` Essay Intro (National Parks): text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ). *** Essay Intro (D.C. Statehood): washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ). ``` ``` topic: the Golden State Warriors. characterization 1: the reigning kings of the NBA. characterization 2: possessed of a remarkable cohesion. characterization 3: helmed by superstar Stephen Curry. characterization 4: perched atop the league’s hierarchy. characterization 5: boasting a litany of hall-of-famers. *** topic: emojis. characterization 1: shorthand for a digital generation. characterization 2: more versatile than words. characterization 3: the latest frontier in language. characterization 4: a form of self-expression. characterization 5: quintessentially millennial. 
characterization 6: reflective of a tech-centric world. *** topic: ``` ``` regular: illinois went against the census' population-loss prediction by getting more residents. VBG: defying the census' prediction of population loss, illinois experienced growth. *** regular: microsoft word’s high pricing increases the likelihood of competition. VBG: extortionately priced, microsoft word is inviting competition. *** regular: ``` ``` source: badminton should be more popular in the US. QUERY: Based on the given topic, can you develop a story outline? target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing. *** source: movies in theaters should be free. QUERY: Based on the given topic, can you develop a story outline? target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay. *** source: ``` ``` in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure. *** the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule. *** the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement. *** ``` ``` it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise. question: what does “do likewise” mean in the above context? (a) make the same journey (b) share in the promise of the american dream (c) start anew in the land of opportunity (d) make landfall on the united states *** in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure. question: what does “this orientation” mean in the above context? (a) visible business practices (b) candor with the public (c) open, honest communication (d) culture of accountability ``` ``` example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot. text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities. *** example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear. 
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student. ``` ``` <Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle> *** <Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle> ``` ``` accustomed to having its name uttered ______, harvard university is weathering a rare spell of reputational tumult (a) in reverential tones (b) with great affection (c) in adulatory fashion (d) in glowing terms ``` ``` clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ). ``` ``` description: when someone thinks that their view is the only right one. synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous. *** description: when you put something off. synonyms: shelve, defer, table, postpone. ``` ``` organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea. rewrite phrases: meritocratic, viability, vision rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability. ``` *Note* Of all the masking techniques, this one works the best. ``` <Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle> *** <Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle> ``` ``` essence: when someone's views are keeping within reasonable. refine: the senator's voting record is ( moderate / centrist / pragmatic / balanced / fair-minded / even-handed ). *** essence: when things are worked through in a petty way. refine: the propensity of the u.s. congress to settle every dispute by way of ( mudslinging / bickering / demagoguery / name-calling / finger-pointing / vilification ) is appalling. ``` ``` description: when someone thinks that their view is the only right one. synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous. *** description: when you put something off. synonyms: shelve, defer, table, postpone. ``` ``` organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea. rewrite phrases: meritocratic, viability, vision rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability. ``` ``` music before bedtime [makes for being able to relax] -> is a recipe for relaxation. ``` ``` [people wanting entertainment love traveling new york city] -> travelers flock to new york city in droves, drawn to its iconic entertainment scene. [cannot blame them] -> one cannot fault them [broadway so fun] -> when it is home to such thrilling fare as Broadway. ``` ``` in their ( ‖ when you are rushing because you want to get there on time ‖ / haste to arrive punctually / mad dash to be timely ), morning commuters are too rushed to whip up their own meal. *** politicians prefer to author vague plans rather than ( ‖ when you can make a plan without many unknowns ‖ / actionable policies / concrete solutions ). ``` ``` Q: What is whistleblower protection? A: Whistleblower protection is a form of legal immunity granted to employees who expose the unethical practices of their employer. Q: Why are whistleblower protections important? 
A: Absent whistleblower protections, employees would be deterred from exposing their employer’s wrongdoing for fear of retribution. Q: Why would an employer engage in retribution? A: An employer who has acted unethically stands to suffer severe financial and reputational damage were their transgressions to become public. To safeguard themselves from these consequences, they might seek to dissuade employees from exposing their wrongdoing. ``` ``` original: the meritocratic nature of crowdfunding [MASK] into their vision's viability. infill: the meritocratic nature of crowdfunding [gives investors idea of how successful] -> ( offers entrepreneurs a window ) into their vision's viability. ``` ``` Leadership | Lecture 17: Worker Morale What Workers Look for in Companies: • Benefits o Tuition reimbursement o Paid parental leave o 401K matching o Profit sharing o Pension plans o Free meals • Social responsibility o Environmental stewardship o Charitable contributions o Diversity • Work-life balance o Telecommuting o Paid holidays and vacation o Casual dress • Growth opportunities • Job security • Competitive compensation • Recognition o Open-door policies o Whistleblower protection o Employee-of-the-month awards o Positive performance reviews o Bonuses ``` ``` description: business keywords: for-profit, fiduciary duty, monopolistic, bottom line, return on investment, short-term thinking, capital-intensive, self-interested, risk-taking, fiduciary duty, merger, speculation, profiteering, oversight, capitalism, diversification ``` ``` 3. In this task, you are given a company name and you need to find its industry. McDonalds -- Restaurant Facebook -- Social Network IKEA -- Furniture American Express -- Credit Services Nokia -- Telecom Nintendo -- Entertainment 4. In this task, you are given a Month and you need to convert it to its corresponding season April -- Spring December -- Winter July -- Summer October -- Fall February -- Winter 5. In this task, you are given a sentence with a missing word and you need to predict the correct word. Managers should set an _____ for their employees. -- example Some people spend more than four _____ in the gym. -- hours The police were on the _____ of arresting the suspect. -- verge They were looking for _____ on how to solve the problem. -- guidance What is the _____ of the coffee? -- price 6. In this task, you are given a paragraph and you need to reorder it to make it logical. It was first proposed in 1987. The total length of the bridge is 1,828 meters. The idea of a bridge connects Hong Kong to Macau. -- The idea of bridge connecting Hong Kong and Macau was first proposed in 1987. The total length of the bridge is 1,828 meters. It is a movie about a brave and noble policeman. The film was produced by Americans. They were Kevin Lima and Chris Buck. They are directors. The movie is called Tarzan. -- Produced by Americans Kevin Lima and Chris Buck, Tarzan is a movie about a brave and noble policeman. It was first discovered in the mountains of India. The active ingredients in this plant can stimulate hair growth. The plant is called "Hair Plus." -- First discovered in the mountains of India, Hair Plus is a plant whose active ingredients can stimulate hair growth. ``` ``` trivia: What is the population of South Korea? response: 51 million. *** trivia: What is the minimum voting age in the US? response: 18. *** trivia: What are the first ten amendments of the US constitution called? response: Bill of Rights. ```
svjack/prompt-extend-chinese
svjack
2022-11-08T03:05:03Z
106
3
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "MT5", "text-to-text", "zh", "Chinese", "license:other", "autotrain_compatible", "region:us" ]
text2text-generation
2022-11-07T12:12:51Z
---
language: zh
license: other
tags:
- MT5
- mt5
- text-to-text
- zh
- Chinese
inference: false
extra_gated_prompt: |-
  The License specifies:

  1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
  2. rinna Co., Ltd. claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license

  By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
  I have read the License and agree with its terms: checkbox
---

# Chinese Stable Diffusion Prompt Extend Model Card

<!-- ![rinna](https://github.com/rinnakk/japanese-clip/blob/master/data/rinna.png?raw=true) -->

svjack/prompt-extend-chinese is a Chinese text-to-text generator that produces style cues for a short Chinese prompt. With the help of these style cues, a Stable Diffusion model can perform noticeably better.<br/>

The idea comes from a project named [prompt-extend](https://github.com/daspartho/prompt-extend), which extends Stable Diffusion English prompts with suitable style cues using text generation. You can try that project on its [HuggingFace Space](https://huggingface.co/spaces/daspartho/prompt-extend).

```python
from transformers import T5Tokenizer, MT5ForConditionalGeneration

model = "svjack/prompt-extend-chinese"
device = "cpu"

tokenizer = T5Tokenizer.from_pretrained(model)
model = MT5ForConditionalGeneration.from_pretrained(model).to(device).eval()

prompt = "护国公克伦威尔"

encode = tokenizer(prompt, return_tensors='pt').to(device)
answer = model.generate(encode.input_ids)[0]
decoded = tokenizer.decode(answer, skip_special_tokens=True)
decoded

'''
的肖像,由,和,制作,在艺术站上趋势
'''
```

With the help of this generator, you can enhance the prompts fed to a Stable Diffusion model. Take [svjack/Stable-Diffusion-FineTuned-zh-v1](https://huggingface.co/svjack/Stable-Diffusion-FineTuned-zh-v1) for example: in each pair below, the second image uses the style-extended version of the first prompt.

第一次世界大战 ![第一次世界大战](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/imgs/war_v1.jpg?raw=true)

第一次世界大战,在艺术站的潮流,8,高度详细,高质量,高分辨率,获 ![第一次世界大战,在艺术站的潮流,8,高度详细,高质量,高分辨率,获](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/imgs/war_style_v1.jpg?raw=true)

The following example is even more striking.

护国公克伦威尔 ![护国公克伦威尔](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/Protector_Cromwell.png?raw=true)

护国公克伦威尔,的肖像,由,和,制作,在艺术站上趋势 ![护国公克伦威尔,的肖像,由,和,制作,在艺术站上趋势](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/Protector_Cromwell_style.png?raw=true)
PrimeQA/DrDecr-large_XOR-TyDi_whitebox
PrimeQA
2022-11-08T02:57:16Z
0
0
null
[ "arxiv:2112.08185", "region:us" ]
null
2022-11-08T01:05:48Z
# Basic Information

This is the Dr. Decr-large model used in the XOR-TyDi leaderboard task 1 whitebox submission.

https://nlp.cs.washington.edu/xorqa/

The detailed implementation of the model can be found in:

https://arxiv.org/pdf/2112.08185.pdf

Source code to train the model can be found via PrimeQA's IR component:

https://github.com/primeqa/primeqa/tree/main/examples/drdecr

It is a neural IR model built on top of the ColBERTv2 API and is not directly compatible with the Hugging Face API.

The inference result on the XOR Dev dataset is:

```
      R@2kt   R@5kt
ko    69.1    75.1
ar    68.0    75.7
bn    81.9    85.2
fi    68.2    73.6
ru    67.1    72.2
ja    63.1    69.7
te    82.8    86.1
Avg   71.4    76.8
```

# Limitations and Bias

This model uses the pre-trained XLM-R large model and was fine-tuned on the 7 languages of the XOR-TyDi leaderboard. Performance on other languages was not tested. Since the model was fine-tuned on the large pre-trained language model XLM-RoBERTa, biases associated with the pre-existing XLM-RoBERTa model may be present in our fine-tuned model, Dr. Decr.

# Citation

```
@article{Li2021_DrDecr,
  doi = {10.48550/ARXIV.2112.08185},
  url = {https://arxiv.org/abs/2112.08185},
  author = {Li, Yulong and Franz, Martin and Sultan, Md Arafat and Iyer, Bhavani and Lee, Young-Suk and Sil, Avirup},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Learning Cross-Lingual IR from an English Retriever},
  publisher = {arXiv},
  year = {2021}
}
```
gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier
gabrielgmendonca
2022-11-08T01:25:38Z
106
0
transformers
[ "transformers", "pytorch", "tf", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-25T18:00:16Z
--- license: mit tags: - generated_from_trainer model-index: - name: bert-base-portuguese-cased-finetuned-chico-xavier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-portuguese-cased-finetuned-chico-xavier This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7196 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0733 | 1.0 | 561 | 1.8147 | | 1.8779 | 2.0 | 1122 | 1.7624 | | 1.8345 | 3.0 | 1683 | 1.7206 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
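## How to use

A minimal, untested sketch using the `fill-mask` pipeline; the Portuguese example sentence is invented.

```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier",
)
print(unmasker("A caridade é o [MASK] da vida."))  # invented example
```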
Asfesalas/ppo-LunarLander-v2
Asfesalas
2022-11-08T00:40:57Z
6
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-08T00:40:18Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 234.84 +/- 22.67
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal sketch of loading the checkpoint; the archive filename inside the repository is an assumption — check the repo's file list.

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is assumed from the repo id, not documented in this card.
checkpoint = load_from_hub(
    repo_id="Asfesalas/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
kit-nlp/bert-base-japanese-basic-char-v2-irony
kit-nlp
2022-11-08T00:10:26Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T07:33:23Z
--- language: ja license: cc-by-sa-4.0 --- # bert-base-irony This is a BERT Base model for the Japanese language finetuned for automatic irony detection. The model was based on [BERT base Japanese](https://huggingface.co/hiroshi-matsuda-rit/bert-base-japanese-basic-char-v2), and later finetuned on a dataset containing ironic and sarcastic tweets. ## Licenses The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License. <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a> ## Citations Please, cite this model using the following citation. ``` @inproceedings{dan2022bert-base-irony, title={北見工業大学 テキスト情報処理研究室 BERT Base 皮肉検出モデル (RIT ver.)}, author={団 俊輔 and プタシンスキ ミハウ and ジェプカ ラファウ and 桝井 文人}, publisher={HuggingFace}, year={2022}, url = "https://huggingface.co/kit-nlp/bert-base-japanese-basic-char-v2-irony" } ```
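## How to use

An untested sketch of scoring a tweet for irony with the raw model. The underlying Japanese tokenizer may require extra dependencies (e.g. `fugashi`, `unidic-lite`); the example tweet is invented, and label names depend on the fine-tuning setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "kit-nlp/bert-base-japanese-basic-char-v2-irony"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("今日も最高の一日だったね(棒)", return_tensors="pt")  # invented example
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```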
Gr00t16/distilbert-imdb
Gr00t16
2022-11-07T23:24:30Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T22:53:28Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: distilbert-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.92916 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1827 - Accuracy: 0.9292 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2182 | 1.0 | 1563 | 0.1827 | 0.9292 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
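## How to use

A minimal, untested sketch; the review text is invented and the label names come from the exported config.

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="Gr00t16/distilbert-imdb")
print(sentiment("A beautifully shot film, but the script never quite lands."))
```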
LisanneH/AgeEstimation
LisanneH
2022-11-07T22:50:04Z
0
3
null
[ "license:unknown", "region:us" ]
null
2022-11-05T19:36:37Z
--- license: unknown --- # Age estimation in supermarkets The model analyzed in this card estimates a person's age. The project was done for the master's programme Applied Artificial Intelligence and concerns estimating ages in supermarkets when a person wants to buy alcohol. The model's only goal is to estimate the age of a person in an image; it does not predict ethnicity or gender. ## Model description **Used dataset:** UTKFace images - This dataset contains roughly 24K face images. - The age of the person in a picture is labeled in the filename of that image. - Since we have no use for baby images, we cut these out of the dataset, leaving 21K images. **Model input:** Facial images **Model output:** For a face in a picture, the model returns the estimated age of that person, together with a confidence score for the estimate. **Model architecture:** A Convolutional Neural Network. This CNN performs regression to estimate the ages. ## Performance To determine the performance of the model, the following metrics have been used: - MSE, which measures how close the regression line is to the data points. <br> &ensp; - *Our model's MSE:* 60.9 - RMSE, the square root of the MSE, which expresses the typical prediction error in years. <br> &ensp; - *Our model's RMSE:* 7.8 - MAE, the average absolute error between the model's predictions and the actual ages. <br> &ensp; - *Our model's MAE:* 5.2 Ideally, the RMSE and the MAE should be close to each other; a large gap between the two indicates high variance in the individual errors. Our results show that the model's predictions can be around 8 years off the actual age of a person. We also looked at how the model performs across age, gender, and race classes. The model predicted the ages of people between 20 and 30 better than the rest, predicted the ages of females better than those of males, and was most accurate for East Asian faces. ## Limitations - **Lighting** <br> When the lighting is poor, the age estimate can be poor as well - **Occlusion** <br> Partially hidden or obstructed faces (e.g. faces wearing masks) might not be detected - **UTKFace** <br> The ages in this dataset are themselves estimates made by a previous model. Since we do not know the exact ages of the people in the images, our model cannot be fully reliable. ## Training and evaluation data Train data: 70% Test data: 30% The model was developed by trial and error. The following architecture is the outcome: - Hidden layers: 7 - Batch size: 128 - Epochs: 65 - Optimizer: adam - Activation: ReLU & Linear
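As a sanity check on the reported numbers, RMSE is simply the square root of MSE (√60.9 ≈ 7.8, matching the card). A minimal NumPy sketch of the three metrics, with hypothetical example arrays:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAE as used in this card (ages in years)."""
    err = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    mse = np.mean(err ** 2)        # mean squared error
    rmse = np.sqrt(mse)            # root mean squared error = sqrt(MSE)
    mae = np.mean(np.abs(err))     # mean absolute error
    return mse, rmse, mae

# Hypothetical example: true ages vs. predicted ages
print(regression_metrics([25, 40, 31], [28, 35, 30]))
```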
nsridhar/roberta-finetuned-country
nsridhar
2022-11-07T22:34:06Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-07T22:16:55Z
--- license: cc-by-4.0 tags: - generated_from_trainer model-index: - name: roberta-finetuned-country results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-country This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
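No usage example is given; a minimal sketch with the `transformers` question-answering pipeline (the question/context pair is invented for illustration — the card does not describe the fine-tuning data):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="nsridhar/roberta-finetuned-country")

# Illustrative inputs only.
result = qa(
    question="Which country is Oslo the capital of?",
    context="Oslo is the capital of Norway.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```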
AlekseyKorshuk/amazon-reviews-input-output-6.7b-best
AlekseyKorshuk
2022-11-07T22:14:21Z
7
1
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "dataset:AlekseyKorshuk/amazon-reviews-input-output", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-07T21:47:46Z
--- license: other tags: - generated_from_trainer datasets: - AlekseyKorshuk/amazon-reviews-input-output metrics: - accuracy model-index: - name: amazon-reviews-input-output-6.7b-best results: - task: name: Causal Language Modeling type: text-generation dataset: name: AlekseyKorshuk/amazon-reviews-input-output type: AlekseyKorshuk/amazon-reviews-input-output metrics: - name: Accuracy type: accuracy value: 0.040325203252032524 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazon-reviews-input-output-6.7b-best This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the AlekseyKorshuk/amazon-reviews-input-output dataset. It achieves the following results on the evaluation set: - Loss: 2.6953 - Accuracy: 0.0403 - Samples: 100 - Perplexity: 14.8101 - Table: <wandb.data_types.Table object at 0x7fc684448b50> ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.9912 | 0.06 | 1 | 2.7441 | 0.0404 | | 2.9329 | 0.12 | 2 | 2.7441 | 0.0404 | | 2.9138 | 0.19 | 3 | 2.8262 | 0.0389 | | 2.9395 | 0.25 | 4 | 2.8262 | 0.0389 | | 2.9109 | 0.31 | 5 | 2.7949 | 0.0399 | | 2.8391 | 0.38 | 6 | 2.7461 | 0.0403 | | 2.9368 | 0.44 | 7 | 2.7207 | 0.0398 | | 2.7583 | 0.5 | 8 | 2.7070 | 0.0403 | | 2.9756 | 0.56 | 9 | 2.6836 | 0.0408 | | 2.8442 | 0.62 | 10 | 2.6738 | 0.0403 | | 2.7312 | 0.69 | 11 | 2.6680 | 0.0405 | | 2.7439 | 0.75 | 12 | 2.6699 | 0.0404 | | 2.9075 | 0.81 | 13 | 2.6797 | 0.0403 | | 2.8518 | 0.88 | 14 | 2.6797 | 0.0403 | | 2.8579 | 0.94 | 15 | 2.6777 | 0.0404 | | 2.8916 | 1.0 | 16 | 2.6953 | 0.0403 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
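No usage example is given; a minimal generation sketch. The `Input:`/`Output:` prompt format below is a guess inferred from the dataset name, not documented in the card, and a 6.7B OPT checkpoint needs roughly 13 GB of GPU memory in fp16 plus `accelerate` for `device_map="auto"`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlekseyKorshuk/amazon-reviews-input-output-6.7b-best"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Prompt format is an assumption inferred from the dataset name.
prompt = "Input: wireless earbuds with great battery life\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```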
SiddharthaM/resnet-18-feature-extraction
SiddharthaM
2022-11-07T21:50:04Z
114
0
transformers
[ "transformers", "pytorch", "tensorboard", "resnet", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-07T17:47:48Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - precision - recall - f1 model-index: - name: resnet-18-feature-extraction results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.95 - name: Precision type: precision value: 0.9652777777777778 - name: Recall type: recall value: 0.9788732394366197 - name: F1 type: f1 value: 0.972027972027972 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-18-feature-extraction This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1485 - Accuracy: 0.95 - Precision: 0.9653 - Recall: 0.9789 - F1: 0.9720 - Roc Auc: 0.8505 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | No log | 0.8 | 2 | 0.6232 | 0.75 | 0.9636 | 0.7465 | 0.8413 | 0.7621 | | No log | 1.8 | 4 | 0.6971 | 0.4875 | 1.0 | 0.4225 | 0.5941 | 0.7113 | | No log | 2.8 | 6 | 0.7915 | 0.2875 | 1.0 | 0.1972 | 0.3294 | 0.5986 | | No log | 3.8 | 8 | 0.8480 | 0.2875 | 1.0 | 0.1972 | 0.3294 | 0.5986 | | 0.8651 | 4.8 | 10 | 0.9094 | 0.2562 | 1.0 | 0.1620 | 0.2788 | 0.5810 | | 0.8651 | 5.8 | 12 | 0.7470 | 0.5625 | 1.0 | 0.5070 | 0.6729 | 0.7535 | | 0.8651 | 6.8 | 14 | 0.5915 | 0.85 | 1.0 | 0.8310 | 0.9077 | 0.9155 | | 0.8651 | 7.8 | 16 | 0.4817 | 0.8875 | 0.9844 | 0.8873 | 0.9333 | 0.8881 | | 0.8651 | 8.8 | 18 | 0.3455 | 0.9187 | 0.9778 | 0.9296 | 0.9531 | 0.8815 | | 0.5349 | 9.8 | 20 | 0.2966 | 0.9187 | 0.9708 | 0.9366 | 0.9534 | 0.8572 | | 0.5349 | 10.8 | 22 | 0.2347 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | | 0.5349 | 11.8 | 24 | 0.2468 | 0.9313 | 0.9645 | 0.9577 | 0.9611 | 0.8400 | | 0.5349 | 12.8 | 26 | 0.2310 | 0.9563 | 0.9720 | 0.9789 | 0.9754 | 0.8783 | | 0.5349 | 13.8 | 28 | 0.2083 | 0.9313 | 0.9580 | 0.9648 | 0.9614 | 0.8157 | | 0.3593 | 14.8 | 30 | 0.1840 | 0.9375 | 0.9521 | 0.9789 | 0.9653 | 0.7950 | | 0.3593 | 15.8 | 32 | 0.1947 | 0.9375 | 0.9648 | 0.9648 | 0.9648 | 0.8435 | | 0.3593 | 16.8 | 34 | 0.1837 | 0.9313 | 0.9517 | 0.9718 | 0.9617 | 0.7915 | | 0.3593 | 17.8 | 36 | 0.1819 | 0.9437 | 0.9524 | 0.9859 | 0.9689 | 0.7985 | | 0.3593 | 18.8 | 38 | 0.1924 | 0.9437 | 0.9650 | 0.9718 | 0.9684 | 0.8470 | | 0.2737 | 19.8 | 40 | 0.1990 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | | 0.2737 | 20.8 | 42 | 0.1759 | 0.95 | 0.9718 | 0.9718 | 0.9718 | 0.8748 | | 0.2737 | 21.8 | 44 | 0.1804 | 0.9313 | 0.9517 | 0.9718 | 0.9617 | 0.7915 | | 0.2737 | 22.8 | 46 | 0.1666 | 0.9313 | 0.9517 | 0.9718 | 0.9617 | 0.7915 | | 0.2737 | 23.8 | 48 | 0.1534 | 0.9437 | 0.9524 | 0.9859 | 0.9689 | 0.7985 | | 0.2278 | 24.8 | 50 | 0.1612 | 0.9375 | 0.9521 | 0.9789 | 0.9653 | 0.7950 | | 0.2278 | 25.8 | 52 | 0.1535 | 0.9437 | 0.9586 | 0.9789 | 0.9686 | 0.8228 | | 0.2278 | 26.8 | 54 | 0.1568 | 0.9437 | 0.9716 | 0.9648 | 0.9682 | 0.8713 | | 0.2278 | 27.8 | 56 | 0.2107 | 0.9375 | 0.9714 | 0.9577 | 0.9645 | 0.8678 | | 0.2278 | 28.8 | 58 | 0.1592 | 0.9313 | 0.9517 | 0.9718 | 0.9617 | 0.7915 | | 0.2057 | 29.8 | 60 | 0.1557 | 0.9375 | 0.9648 | 0.9648 | 0.9648 | 0.8435 | | 0.2057 | 30.8 | 62 | 0.1714 | 0.9437 | 0.9650 | 0.9718 | 0.9684 | 0.8470 | | 0.2057 | 31.8 | 64 | 0.1571 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | | 0.2057 | 32.8 | 66 | 0.1574 | 0.9375 | 0.9583 | 0.9718 | 0.9650 | 0.8192 | | 0.2057 | 33.8 | 68 | 0.1423 | 0.9563 | 0.9720 | 0.9789 | 0.9754 | 0.8783 | | 0.2 | 34.8 | 70 | 0.1677 | 0.9437 | 0.9650 | 0.9718 | 0.9684 | 0.8470 | | 0.2 | 35.8 | 72 | 0.1560 | 0.9375 | 0.9583 | 0.9718 | 0.9650 | 0.8192 | | 0.2 | 36.8 | 74 | 0.1594 | 0.9375 | 0.9521 | 0.9789 | 0.9653 | 0.7950 | | 0.2 | 37.8 | 76 | 0.1512 | 0.9437 | 0.9586 | 0.9789 | 0.9686 | 0.8228 | | 0.2 | 38.8 | 78 | 0.1396 | 0.9563 | 0.9655 | 0.9859 | 0.9756 | 0.8541 | | 0.1838 | 39.8 | 80 | 0.1509 | 0.9375 | 0.9583 | 0.9718 | 0.9650 | 0.8192 | | 0.1838 | 40.8 | 82 | 0.1529 | 0.95 | 0.9718 | 0.9718 | 0.9718 | 0.8748 | | 0.1838 | 41.8 | 84 | 0.1506 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | | 0.1838 | 42.8 | 86 | 0.1549 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | | 0.1838 | 43.8 | 88 | 0.1331 | 0.9563 | 0.9655 | 0.9859 | 0.9756 | 0.8541 | | 0.1872 | 44.8 | 90 | 0.1409 | 0.9437 | 0.9524 | 0.9859 | 0.9689 | 0.7985 | | 0.1872 | 45.8 | 92 | 0.1639 | 0.9375 | 0.9583 | 0.9718 | 0.9650 | 0.8192 | | 0.1872 | 46.8 | 94 | 0.1391 | 0.95 | 0.9589 | 0.9859 | 0.9722 | 0.8263 | | 0.1872 | 47.8 | 96 | 0.1436 | 0.9563 | 0.9655 | 0.9859 | 0.9756 | 0.8541 | | 0.1872 | 48.8 | 98 | 0.1442 | 0.9437 | 0.9586 | 0.9789 | 0.9686 | 0.8228 | | 0.185 | 49.8 | 100 | 0.1485 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.6.1 - Tokenizers 0.13.1
AlekseyKorshuk/amazon-reviews-input-output-6.7b
AlekseyKorshuk
2022-11-07T21:41:55Z
6
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "dataset:AlekseyKorshuk/amazon-reviews-input-output", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-07T19:53:31Z
--- license: other tags: - generated_from_trainer datasets: - AlekseyKorshuk/amazon-reviews-input-output metrics: - accuracy model-index: - name: amazon-reviews-input-output-6.7b results: - task: name: Causal Language Modeling type: text-generation dataset: name: AlekseyKorshuk/amazon-reviews-input-output type: AlekseyKorshuk/amazon-reviews-input-output metrics: - name: Accuracy type: accuracy value: 0.03882113821138211 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazon-reviews-input-output-6.7b This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the AlekseyKorshuk/amazon-reviews-input-output dataset. It achieves the following results on the evaluation set: - Loss: 2.8574 - Accuracy: 0.0388 - Samples: 100 - Perplexity: 17.4166 - Table: <wandb.data_types.Table object at 0x7fd30eb4e940> ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.9912 | 0.06 | 1 | 2.7441 | 0.0404 | | 2.9329 | 0.12 | 2 | 2.7441 | 0.0404 | | 2.9138 | 0.19 | 3 | 2.8262 | 0.0389 | | 2.9395 | 0.25 | 4 | 2.8262 | 0.0389 | | 2.9109 | 0.31 | 5 | 2.7949 | 0.0399 | | 2.8394 | 0.38 | 6 | 2.7461 | 0.0403 | | 2.9365 | 0.44 | 7 | 2.7207 | 0.0399 | | 2.7588 | 0.5 | 8 | 2.7070 | 0.0403 | | 2.9751 | 0.56 | 9 | 2.6816 | 0.0407 | | 2.844 | 0.62 | 10 | 2.6738 | 0.0404 | | 2.731 | 0.69 | 11 | 2.6680 | 0.0406 | | 2.7434 | 0.75 | 12 | 2.6699 | 0.0404 | | 2.9043 | 0.81 | 13 | 2.6855 | 0.0400 | | 2.8564 | 0.88 | 14 | 2.6855 | 0.0400 | | 2.8716 | 0.94 | 15 | 2.6855 | 0.0400 | | 2.896 | 1.0 | 16 | 2.6953 | 0.0398 | | 1.9858 | 1.06 | 17 | 2.7070 | 0.0400 | | 2.0563 | 1.12 | 18 | 2.7285 | 0.0400 | | 2.04 | 1.19 | 19 | 2.7676 | 0.0398 | | 1.9885 | 1.25 | 20 | 2.7910 | 0.0396 | | 2.09 | 1.31 | 21 | 2.7969 | 0.0393 | | 2.059 | 1.38 | 22 | 2.8105 | 0.0395 | | 2.0498 | 1.44 | 23 | 2.7930 | 0.0398 | | 1.9568 | 1.5 | 24 | 2.7910 | 0.0401 | | 2.1418 | 1.56 | 25 | 2.7930 | 0.0398 | | 1.975 | 1.62 | 26 | 2.7930 | 0.0397 | | 1.996 | 1.69 | 27 | 2.7949 | 0.0393 | | 1.9617 | 1.75 | 28 | 2.8047 | 0.0392 | | 2.2062 | 1.81 | 29 | 2.8145 | 0.0388 | | 1.9929 | 1.88 | 30 | 2.8145 | 0.0386 | | 1.9235 | 1.94 | 31 | 2.8281 | 0.0390 | | 1.9127 | 2.0 | 32 | 2.8574 | 0.0388 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
understaters/ddpm-butterflies-128
understaters
2022-11-07T21:22:47Z
3
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-07T20:04:40Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/understaters/ddpm-butterflies-128/tensorboard?#scalars)
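The "How to use" section above is still a TODO; a minimal sketch of the standard pattern for unconditional `DDPMPipeline` checkpoints (the output filename is arbitrary):

```python
from diffusers import DDPMPipeline

# Load the pipeline from the Hub and sample one unconditional image.
pipeline = DDPMPipeline.from_pretrained("understaters/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```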
bishalbaaniya/bishalbaaniya-finetuned-myaamia-to-english
bishalbaaniya
2022-11-07T21:15:33Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-27T03:24:09Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: bishalbaaniya-finetuned-myaamia-to-english results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bishalbaaniya-finetuned-myaamia-to-english This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.0090 - Bleu: 0.1637 - Gen Len: 7.977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 4.1712 | 1.0 | 1082 | 4.0090 | 0.1637 | 7.977 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-tat-ntsema-colab
ntsema
2022-11-07T20:52:34Z
125
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-07T08:05:09Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - audiofolder metrics: - wer model-index: - name: wav2vec2-xlsr-53-espeak-cv-ft-tat-ntsema-colab results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Wer type: wer value: 0.28339140534262486 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xlsr-53-espeak-cv-ft-tat-ntsema-colab This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2976 - Wer: 0.2834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5013 | 3.57 | 400 | 0.4017 | 0.4837 | | 0.3368 | 7.14 | 800 | 0.2774 | 0.3693 | | 0.1942 | 10.71 | 1200 | 0.3054 | 0.3386 | | 0.1449 | 14.28 | 1600 | 0.3085 | 0.3246 | | 0.1147 | 17.85 | 2000 | 0.3134 | 0.3037 | | 0.0944 | 21.43 | 2400 | 0.3046 | 0.2933 | | 0.0778 | 24.99 | 2800 | 0.3057 | 0.2927 | | 0.0643 | 28.57 | 3200 | 0.2976 | 0.2834 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.14.0.dev20221107+cu116 - Datasets 2.6.1 - Tokenizers 0.13.2
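No usage example is given; a minimal sketch with the ASR pipeline. Note that the base checkpoint, `wav2vec2-xlsr-53-espeak-cv-ft`, transcribes into phonemes, so the output is likely phonetic rather than orthographic; `sample.wav` is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ntsema/wav2vec2-xlsr-53-espeak-cv-ft-tat-ntsema-colab",
)
# Placeholder audio file; any 16 kHz mono recording should work.
print(asr("sample.wav")["text"])
```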
okho0653/distilbert-base-uncased-finetuned-sst-2-english-zero-shot
okho0653
2022-11-07T20:48:31Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T20:44:51Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-sst-2-english-zero-shot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst-2-english-zero-shot This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 5.2284 - eval_accuracy: 0.0 - eval_f1: 0.0 - eval_runtime: 0.9696 - eval_samples_per_second: 27.845 - eval_steps_per_second: 2.063 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
AlekseyKorshuk/amazon-reviews-input-output-1.3b
AlekseyKorshuk
2022-11-07T20:45:36Z
5
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "dataset:AlekseyKorshuk/amazon-reviews-input-output", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-07T20:26:17Z
--- license: other tags: - generated_from_trainer datasets: - AlekseyKorshuk/amazon-reviews-input-output metrics: - accuracy model-index: - name: amazon-reviews-input-output-1.3b results: - task: name: Causal Language Modeling type: text-generation dataset: name: AlekseyKorshuk/amazon-reviews-input-output type: AlekseyKorshuk/amazon-reviews-input-output metrics: - name: Accuracy type: accuracy value: 0.03550813008130081 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazon-reviews-input-output-1.3b This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/amazon-reviews-input-output dataset. It achieves the following results on the evaluation set: - Loss: 3.5488 - Accuracy: 0.0355 - Samples: 100 - Perplexity: 34.7725 - Table: <wandb.data_types.Table object at 0x7ffa3c3fd700> ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.2024 | 0.06 | 1 | 2.9121 | 0.0385 | | 3.1226 | 0.12 | 2 | 2.9121 | 0.0385 | | 3.1321 | 0.19 | 3 | 2.8477 | 0.0394 | | 2.9875 | 0.25 | 4 | 2.8477 | 0.0394 | | 2.9717 | 0.31 | 5 | 2.8555 | 0.0391 | | 2.9341 | 0.38 | 6 | 2.8438 | 0.0392 | | 3.0376 | 0.44 | 7 | 2.8184 | 0.0396 | | 2.8164 | 0.5 | 8 | 2.7988 | 0.0395 | | 3.0857 | 0.56 | 9 | 2.7988 | 0.0394 | | 2.9492 | 0.62 | 10 | 2.7969 | 0.0395 | | 2.8633 | 0.69 | 11 | 2.7969 | 0.0395 | | 2.8994 | 0.75 | 12 | 2.7910 | 0.0398 | | 3.0024 | 0.81 | 13 | 2.7812 | 0.0401 | | 2.937 | 0.88 | 14 | 2.7812 | 0.0399 | | 2.9963 | 0.94 | 15 | 2.7812 | 0.0399 | | 3.0168 | 1.0 | 16 | 2.7754 | 0.04 | | 2.2589 | 1.06 | 17 | 2.7715 | 0.0397 | | 2.2568 | 1.12 | 18 | 2.7793 | 0.0395 | | 2.3138 | 1.19 | 19 | 2.8027 | 0.0393 | | 2.2759 | 1.25 | 20 | 2.8184 | 0.0393 | | 2.5137 | 1.31 | 21 | 2.8262 | 0.0390 | | 2.2997 | 1.38 | 22 | 2.8320 | 0.0388 | | 2.2693 | 1.44 | 23 | 2.8359 | 0.0392 | | 2.204 | 1.5 | 24 | 2.8379 | 0.0387 | | 2.3713 | 1.56 | 25 | 2.8359 | 0.0391 | | 2.3448 | 1.62 | 26 | 2.8340 | 0.0391 | | 2.217 | 1.69 | 27 | 2.8359 | 0.0391 | | 2.3082 | 1.75 | 28 | 2.8379 | 0.0385 | | 2.2878 | 1.81 | 29 | 2.8379 | 0.0386 | | 2.2429 | 1.88 | 30 | 2.8379 | 0.0385 | | 2.2838 | 1.94 | 31 | 2.8359 | 0.0385 | | 2.4038 | 2.0 | 32 | 2.8379 | 0.0387 | | 1.8481 | 2.06 | 33 | 2.8555 | 0.0384 | | 1.657 | 2.12 | 34 | 2.8965 | 0.0382 | | 1.6996 | 2.19 | 35 | 2.9590 | 0.0380 | | 1.6741 | 2.25 | 36 | 3.0312 | 0.0379 | | 1.594 | 2.31 | 37 | 3.0410 | 0.0380 | | 1.5201 | 2.38 | 38 | 3.0156 | 0.0381 | | 1.5149 | 2.44 | 39 | 3.0137 | 0.0380 | | 1.5521 | 2.5 | 40 | 3.0176 | 0.0379 | | 1.5364 | 2.56 | 41 | 3.0273 | 0.0378 | | 1.5385 | 2.62 | 42 | 3.0391 | 0.0380 | | 1.4794 | 2.69 | 43 | 3.0488 | 0.0380 | | 1.4313 | 2.75 | 44 | 3.0527 | 0.0378 | | 1.5071 | 2.81 | 45 | 3.0469 | 0.0378 | | 1.4799 | 2.88 | 46 | 3.0449 | 0.0378 | | 1.521 | 2.94 | 47 | 3.0371 | 0.0380 | | 1.4603 | 3.0 | 48 | 3.0410 | 0.0379 | | 1.25 | 3.06 | 49 | 3.0859 | 0.0381 | | 1.0411 | 3.12 | 50 | 3.1797 | 0.0375 | | 1.0385 | 3.19 | 51 | 3.2969 | 0.0371 | | 1.0254 | 3.25 | 52 | 3.3613 | 0.0367 | | 0.9656 | 3.31 | 53 | 3.3633 | 0.0368 | | 1.036 | 3.38 | 54 | 3.3359 | 0.0366 | | 0.9366 | 3.44 | 55 | 3.2949 | 0.0366 | | 0.9712 | 3.5 | 56 | 3.2695 | 0.0367 | | 1.0066 | 3.56 | 57 | 3.2676 | 0.0366 | | 0.9952 | 3.62 | 58 | 3.2773 | 0.0368 | | 1.0352 | 3.69 | 59 | 3.2891 | 0.0367 | | 1.0212 | 3.75 | 60 | 3.3164 | 0.0362 | | 0.9468 | 3.81 | 61 | 3.3203 | 0.0360 | | 0.9155 | 3.88 | 62 | 3.3223 | 0.0366 | | 0.8552 | 3.94 | 63 | 3.3262 | 0.0370 | | 0.9575 | 4.0 | 64 | 3.3340 | 0.0370 | | 0.6384 | 4.06 | 65 | 3.375 | 0.0370 | | 0.6436 | 4.12 | 66 | 3.4453 | 0.0364 | | 0.5752 | 4.19 | 67 | 3.5391 | 0.0358 | | 0.6542 | 4.25 | 68 | 3.6016 | 0.0354 | | 0.6724 | 4.31 | 69 | 3.6016 | 0.0354 | | 0.591 | 4.38 | 70 | 3.5938 | 0.0359 | | 0.5346 | 4.44 | 71 | 3.5801 | 0.0361 | | 0.5112 | 4.5 | 72 | 3.5762 | 0.0361 | | 0.5443 | 4.56 | 73 | 3.5840 | 0.0362 | | 0.5689 | 4.62 | 74 | 3.6152 | 0.0358 | | 0.5667 | 4.69 | 75 | 3.6328 | 0.0358 | | 0.554 | 4.75 | 76 | 3.6348 | 0.0357 | | 0.6087 | 4.81 | 77 | 3.625 | 0.0355 | | 0.5236 | 4.88 | 78 | 3.6152 | 0.0355 | | 0.5458 | 4.94 | 79 | 3.5781 | 0.0355 | | 0.5702 | 5.0 | 80 | 3.5488 | 0.0355 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
okho0653/distilbert-base-zero-shot
okho0653
2022-11-07T20:44:16Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T20:40:31Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-zero-shot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-zero-shot This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.7147 - eval_accuracy: 0.0741 - eval_f1: 0.1379 - eval_runtime: 1.1794 - eval_samples_per_second: 22.894 - eval_steps_per_second: 1.696 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
okho0653/Bio_ClinicalBERT-zero-shot
okho0653
2022-11-07T20:40:03Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T20:34:18Z
--- license: mit tags: - generated_from_trainer model-index: - name: Bio_ClinicalBERT-zero-shot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bio_ClinicalBERT-zero-shot This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.5417 - eval_accuracy: 1.0 - eval_f1: 1.0 - eval_runtime: 4.3261 - eval_samples_per_second: 6.241 - eval_steps_per_second: 0.462 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
edbeeching/atari_wizardofwor_3333
edbeeching
2022-11-07T20:29:12Z
4
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:28:16Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_wizardofwor type: atari_wizardofwor metrics: - type: mean_reward value: 25500.00 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_wizardofwor** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_videopinball_3333
edbeeching
2022-11-07T20:27:56Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:26:42Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_videopinball type: atari_videopinball metrics: - type: mean_reward value: nan +/- nan name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_videopinball** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_upndown_3333
edbeeching
2022-11-07T20:25:01Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:23:31Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_upndown type: atari_upndown metrics: - type: mean_reward value: nan +/- nan name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_upndown** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_tutankham_3333
edbeeching
2022-11-07T20:23:10Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:22:04Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_tutankham type: atari_tutankham metrics: - type: mean_reward value: nan +/- nan name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_tutankham** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_timepilot_3333
edbeeching
2022-11-07T20:21:45Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:20:54Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_timepilot type: atari_timepilot metrics: - type: mean_reward value: nan +/- nan name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_timepilot** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_stargunner_3333
edbeeching
2022-11-07T20:19:02Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:18:00Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_stargunner type: atari_stargunner metrics: - type: mean_reward value: nan +/- nan name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_stargunner** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_spaceinvaders_3333
edbeeching
2022-11-07T20:17:41Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:16:43Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_spaceinvaders type: atari_spaceinvaders metrics: - type: mean_reward value: 2212.50 +/- 2.50 name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_spaceinvaders** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_seaquest_3333
edbeeching
2022-11-07T20:13:26Z
3
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:12:29Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_seaquest type: atari_seaquest metrics: - type: mean_reward value: nan +/- nan name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_seaquest** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_roadrunner_3333
edbeeching
2022-11-07T20:10:27Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:09:13Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_roadrunner type: atari_roadrunner metrics: - type: mean_reward value: 84000.00 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_roadrunner** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
IshyJahy/tatra603
IshyJahy
2022-11-07T20:08:06Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-07T20:08:06Z
--- license: creativeml-openrail-m ---
edbeeching/atari_qbert_3333
edbeeching
2022-11-07T20:07:22Z
1
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:06:16Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_qbert type: atari_qbert metrics: - type: mean_reward value: nan +/- nan name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_qbert** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_privateye_3333
edbeeching
2022-11-07T20:05:57Z
2
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:04:46Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_privateye type: atari_privateye metrics: - type: mean_reward value: 100.00 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_privateye** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_pong_3333
edbeeching
2022-11-07T20:04:26Z
3
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-07T20:03:30Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_pong type: atari_pong metrics: - type: mean_reward value: 21.00 +/- 0.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_pong** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
artemnech/dialoT5-base
artemnech
2022-11-07T18:58:36Z
7
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-08-29T10:37:48Z
How to use:
```python
from collections import deque

import requests
import torch
from bs4 import BeautifulSoup
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = 'artemnech/dialoT5-base'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def generate(text, **kwargs):
    model.eval()
    inputs = tokenizer(text, return_tensors='pt').to(model.device)
    with torch.no_grad():
        hypotheses = model.generate(**inputs, **kwargs)
    return tokenizer.decode(hypotheses[0], skip_special_tokens=True)

def dialog(context):
    # Ask the model for a topic keyword; it answers 'no_keywords' when there is none.
    keyword = generate('keyword: ' + ' '.join(context), num_beams=2)
    # 'knowlege' (sic) is kept as-is: it is the literal prompt prefix the model expects.
    knowlege = ''
    if keyword != 'no_keywords':
        # Fetch the first two paragraphs of the matching Wikipedia article as background.
        resp = requests.get(f"https://en.wikipedia.org/wiki/{keyword}")
        root = BeautifulSoup(resp.content, "html.parser")
        knowlege = "knowlege: " + " ".join(
            p.text.strip()
            for p in root.find("div", class_="mw-body-content mw-content-ltr").find_all("p", limit=2)
        )
    answ = generate(
        'dialog: ' + knowlege + ' '.join(context),
        num_beams=3,
        do_sample=True,
        temperature=1.1,
        encoder_no_repeat_ngram_size=5,
        no_repeat_ngram_size=5,
        max_new_tokens=30,
    )
    return answ

context = deque([], maxlen=4)
while True:
    text = input()
    context.append('user1>>: ' + text)
    answ = dialog(context)
    context.append('user2>>: ' + answ)
    print('bot: ', answ)
```
azuresonance/bert-finetuned-ner
azuresonance
2022-11-07T18:08:45Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-07T17:58:09Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.9351422898742554 - name: Recall type: recall value: 0.9511948838774823 - name: F1 type: f1 value: 0.943100283664275 - name: Accuracy type: accuracy value: 0.9867251427562254 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0604 - Precision: 0.9351 - Recall: 0.9512 - F1: 0.9431 - Accuracy: 0.9867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0861 | 1.0 | 1756 | 0.0691 | 0.9094 | 0.9322 | 0.9206 | 0.9809 | | 0.034 | 2.0 | 3512 | 0.0605 | 0.9303 | 0.9482 | 0.9392 | 0.9861 | | 0.0162 | 3.0 | 5268 | 0.0604 | 0.9351 | 0.9512 | 0.9431 | 0.9867 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
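No usage example is given; a minimal sketch with the token-classification pipeline (the example sentence is invented; `aggregation_strategy="simple"` merges word pieces into whole entities):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="azuresonance/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("George Washington lives in Washington."))
```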
versae/stt_nn-NO_conformer_transducer_large
versae
2022-11-07T17:57:43Z
4
0
nemo
[ "nemo", "region:us" ]
null
2022-11-07T17:51:45Z
Colab → https://colab.research.google.com/drive/1ggqsd5tu6cKf22EiKckbUNTJOwMMqKAh?usp=sharing
GuiGel/xlm-roberta-large-flert-finetune-meddocan
GuiGel
2022-11-07T17:36:11Z
3
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "region:us" ]
token-classification
2022-11-07T17:32:35Z
--- tags: - flair - token-classification - sequence-tagger-model --- ### Demo: How to use in Flair Requires: - **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("GuiGel/xlm-roberta-large-flert-finetune-meddocan") # make example sentence sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ```
GuiGel/xlm-roberta-large-flert-we-finetune-meddocan
GuiGel
2022-11-07T17:31:26Z
4
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "region:us" ]
token-classification
2022-11-07T17:25:45Z
--- tags: - flair - token-classification - sequence-tagger-model --- ### Demo: How to use in Flair Requires: - **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("GuiGel/xlm-roberta-large-flert-we-finetune-meddocan") # make example sentence sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ```
GuiGel/beto-uncased-flert-lstm-crf-meddocan
GuiGel
2022-11-07T17:09:39Z
3
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "region:us" ]
token-classification
2022-11-07T17:08:40Z
--- tags: - flair - token-classification - sequence-tagger-model --- ### Demo: How to use in Flair Requires: - **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("GuiGel/beto-uncased-flert-lstm-crf-meddocan") # make example sentence sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ```
danduh/test-model
danduh
2022-11-07T16:34:50Z
0
0
null
[ "tf", "exbert", "danielTheBest", "TensorFlow", "en", "dataset:bookcorpus", "dataset:wikipedia", "license:apache-2.0", "region:us" ]
null
2022-11-07T16:15:52Z
--- language: en tags: - exbert - danielTheBest - TensorFlow license: apache-2.0 datasets: - bookcorpus - wikipedia --- Some cool example code, in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
jasonsheih/bert-base-uncased-finetuned-vr-comfort-description-review-epoch15-20221107_2125
jasonsheih
2022-11-07T14:57:55Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T14:08:17Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-base-uncased-finetuned-vr-comfort-description-review-epoch15-20221107_2125 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-vr-comfort-description-review-epoch15-20221107_2125 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0521 - Accuracy: 0.8443 - F1: 0.8449 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7304 | 1.0 | 157 | 0.5838 | 0.7521 | 0.6457 | | 0.6062 | 2.0 | 314 | 0.5416 | 0.7593 | 0.7487 | | 0.4363 | 3.0 | 471 | 0.4852 | 0.8120 | 0.8139 | | 0.2679 | 4.0 | 628 | 0.5454 | 0.8204 | 0.8102 | | 0.164 | 5.0 | 785 | 0.6908 | 0.8060 | 0.8162 | | 0.112 | 6.0 | 942 | 0.7277 | 0.8287 | 0.8304 | | 0.0759 | 7.0 | 1099 | 0.9089 | 0.8096 | 0.8192 | | 0.0323 | 8.0 | 1256 | 0.8422 | 0.8551 | 0.8524 | | 0.0174 | 9.0 | 1413 | 1.0020 | 0.8299 | 0.8357 | | 0.0138 | 10.0 | 1570 | 0.9637 | 0.8491 | 0.8473 | | 0.0057 | 11.0 | 1727 | 1.0195 | 0.8503 | 0.8411 | | 0.0044 | 12.0 | 1884 | 1.0172 | 0.8455 | 0.8462 | | 0.0035 | 13.0 | 2041 | 1.0056 | 0.8503 | 0.8487 | | 0.002 | 14.0 | 2198 | 1.0554 | 0.8443 | 0.8451 | | 0.0014 | 15.0 | 2355 | 1.0521 | 0.8443 | 0.8449 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
Maxter825/2
Maxter825
2022-11-07T14:16:05Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2022-11-07T14:16:05Z
--- license: bigscience-openrail-m ---
cyburn/midjourney_v4_finetune
cyburn
2022-11-07T14:06:25Z
0
7
null
[ "region:us" ]
null
2022-11-06T23:53:48Z
# midjourney v4 finetune This model is based on SD1.5 with MSE VAE, finetuned on roughly 300 images created by the Midjourney v4 engine. Prompt: `midjourney v4, <your prompt>` ## models - midjourney_v4-khoya-r12-e2-sd15.ckpt : epoch 2 - midjourney_v4-khoya-r12-e3-sd15.ckpt : epoch 3
AlekseyKorshuk/amazon-reviews-input-output-13b
AlekseyKorshuk
2022-11-07T12:37:54Z
5
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "dataset:AlekseyKorshuk/amazon-reviews-input-output", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-07T11:52:10Z
--- license: other tags: - generated_from_trainer datasets: - AlekseyKorshuk/amazon-reviews-input-output metrics: - accuracy model-index: - name: amazon-reviews-input-output-13b results: - task: name: Causal Language Modeling type: text-generation dataset: name: AlekseyKorshuk/amazon-reviews-input-output type: AlekseyKorshuk/amazon-reviews-input-output metrics: - name: Accuracy type: accuracy value: 0.040426829268292684 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazon-reviews-input-output-13b This model is a fine-tuned version of [facebook/opt-13b](https://huggingface.co/facebook/opt-13b) on the AlekseyKorshuk/amazon-reviews-input-output dataset. It achieves the following results on the evaluation set: - Loss: 2.7168 - Accuracy: 0.0404 - Samples: 100 - Perplexity: 15.1318 - Table: <wandb.data_types.Table object at 0x7f8c9a3fb6d0> ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.2134 | 0.96 | 15 | 2.7168 | 0.0404 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
FacVain/turkish-sentiment-XMLRoBERTa
FacVain
2022-11-07T11:19:48Z
0
0
null
[ "tr", "region:us" ]
null
2022-11-07T09:57:24Z
--- language: tr tags: - text-classification widget: - text: "Oldukça kullanışlı bir ürün." --- This repository contains two models finetuned from twitter-XLM-RoBERTa (https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base). The 3_Label model classifies text as positive, neutral, or negative. The 2_Label_Twitter model was finetuned on tweets and predicts whether a tweet is positive or negative.
silveto/distilbert-base-uncased-finetuned-squad
silveto
2022-11-07T10:44:30Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-02T17:43:38Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1531 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2297 | 1.0 | 5533 | 1.1547 | | 0.9688 | 2.0 | 11066 | 1.1278 | | 0.763 | 3.0 | 16599 | 1.1531 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.10.3
ronanki/all-mpnet-base-v2-2022-11-07
ronanki
2022-11-07T10:40:36Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-07T10:40:27Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # ronanki/all-mpnet-base-v2-2022-11-07 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ronanki/all-mpnet-base-v2-2022-11-07') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/all-mpnet-base-v2-2022-11-07) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 348 with parameters: ``` {'batch_size': 64} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 30, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1044, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
julien-c/avocado-prices
julien-c
2022-11-07T10:31:46Z
0
1
mlconsole
[ "mlconsole", "tabular-regression", "dataset:nateraw/avocado-prices", "license:apache-2.0", "model-index", "region:us" ]
tabular-regression
2022-10-13T08:34:56Z
--- license: apache-2.0 inference: false tags: - mlconsole - tabular-regression library_name: mlconsole metrics: - mae - loss datasets: - nateraw/avocado-prices model-index: - name: avocado-prices results: - task: type: tabular-regression name: tabular-regression dataset: type: nateraw/avocado-prices name: avocado.csv metrics: - type: mae name: Mean absolute error value: 0.22897861897945404 - type: loss name: Model loss value: 0.08849651366472244 --- # regression model trained on "nateraw/avocado-prices" 🤖 [Load and use this model](https://mlconsole.com/model/hf/julien-c/avocado-prices) in one click. 🧑‍💻 [Train your own model](https://mlconsole.com) on ML Console. ### Screenshots ![predict interface](screenshots/predict.png)
Shyam-311/distilgpt2-finetuned-wikitext2
Shyam-311
2022-11-07T09:55:19Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-07T09:08:03Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7602 | 1.0 | 2334 | 3.6669 | | 3.653 | 2.0 | 4668 | 3.6472 | | 3.6006 | 3.0 | 7002 | 3.6421 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
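For reference, the final validation loss of 3.6421 corresponds to a perplexity of exp(3.6421) ≈ 38.2. A minimal generation sketch (the prompt is arbitrary):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Shyam-311/distilgpt2-finetuned-wikitext2")
out = generator("The history of natural language processing", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```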
freepina/musika-hyperpop
freepina
2022-11-07T09:46:51Z
0
0
null
[ "audio", "music", "generation", "tensorflow", "arxiv:2208.08706", "license:mit", "region:us" ]
null
2022-11-07T09:46:18Z
--- license: mit tags: - audio - music - generation - tensorflow --- # Musika Model: musika_hyperpop ## Model provided by: freepina A pretrained musika_hyperpop checkpoint for [Musika](https://github.com/marcoppasini/musika), a system for fast, infinite waveform music generation, introduced in [this paper](https://arxiv.org/abs/2208.08706). ## How to use You can generate music from this pretrained musika_hyperpop model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r). ### Model description This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on the fly; the gradient penalty weighting term is stored in *switch.npy*. The generator is conditioned on a latent coordinate system so it can produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder that converts them into waveform audio. The generator has a context window of about 12 seconds of audio.
pig4431/Sentiment140_BERT_5E
pig4431
2022-11-07T08:46:38Z
10
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:sentiment140", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T08:39:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sentiment140 metrics: - accuracy model-index: - name: Sentiment140_BERT_5E results: - task: name: Text Classification type: text-classification dataset: name: sentiment140 type: sentiment140 config: sentiment140 split: train args: sentiment140 metrics: - name: Accuracy type: accuracy value: 0.82 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sentiment140_BERT_5E This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the sentiment140 dataset. It achieves the following results on the evaluation set: - Loss: 0.7061 - Accuracy: 0.82 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6882 | 0.08 | 50 | 0.6047 | 0.7 | | 0.6223 | 0.16 | 100 | 0.5137 | 0.8067 | | 0.5463 | 0.24 | 150 | 0.4573 | 0.8067 | | 0.4922 | 0.32 | 200 | 0.4790 | 0.8 | | 0.4821 | 0.4 | 250 | 0.4207 | 0.8267 | | 0.4985 | 0.48 | 300 | 0.4267 | 0.8067 | | 0.4455 | 0.56 | 350 | 0.4301 | 0.8133 | | 0.469 | 0.64 | 400 | 0.4294 | 0.82 | | 0.4906 | 0.72 | 450 | 0.4059 | 0.8067 | | 0.4006 | 0.8 | 500 | 0.4181 | 0.8133 | | 0.445 | 0.88 | 550 | 0.3948 | 0.8267 | | 0.4302 | 0.96 | 600 | 0.3976 | 0.84 | | 0.4442 | 1.04 | 650 | 0.3887 | 0.8533 | | 0.3424 | 1.12 | 700 | 0.4119 | 0.8267 | | 0.3589 | 1.2 | 750 | 0.4083 | 0.8533 | | 0.3737 | 1.28 | 800 | 0.4253 | 0.8333 | | 0.334 | 1.36 | 850 | 0.4147 | 0.86 | | 0.3637 | 1.44 | 900 | 0.3926 | 0.8533 | | 0.3388 | 1.52 | 950 | 0.4084 | 0.8267 | | 0.3375 | 1.6 | 1000 | 0.4132 | 0.8467 | | 0.3725 | 1.68 | 1050 | 0.3965 | 0.8467 | | 0.3649 | 1.76 | 1100 | 0.3956 | 0.8333 | | 0.3799 | 1.84 | 1150 | 0.3923 | 0.8333 | | 0.3695 | 1.92 | 1200 | 0.4266 | 0.84 | | 0.3233 | 2.0 | 1250 | 0.4225 | 0.8333 | | 0.2313 | 2.08 | 1300 | 0.4672 | 0.8333 | | 0.231 | 2.16 | 1350 | 0.5212 | 0.8133 | | 0.2526 | 2.24 | 1400 | 0.5392 | 0.8067 | | 0.2721 | 2.32 | 1450 | 0.4895 | 0.82 | | 0.2141 | 2.4 | 1500 | 0.5258 | 0.8133 | | 0.2658 | 2.48 | 1550 | 0.5046 | 0.8267 | | 0.2386 | 2.56 | 1600 | 0.4873 | 0.8267 | | 0.2493 | 2.64 | 1650 | 0.4950 | 0.8333 | | 0.2692 | 2.72 | 1700 | 0.5080 | 0.8267 | | 0.2226 | 2.8 | 1750 | 0.5016 | 0.8467 | | 0.2522 | 2.88 | 1800 | 0.5068 | 0.8267 | | 0.2556 | 2.96 | 1850 | 0.4937 | 0.8267 | | 0.2311 | 3.04 | 1900 | 0.5103 | 0.8267 | | 0.1703 | 3.12 | 1950 | 0.5680 | 0.82 | | 0.1744 | 3.2 | 2000 | 0.5501 | 0.82 | | 0.1667 | 3.28 | 2050 | 0.6142 | 0.82 | | 0.1863 | 3.36 | 2100 | 0.6355 | 0.82 | | 0.2543 | 3.44 | 2150 | 0.6000 | 0.8133 | | 0.1565 | 3.52 | 2200 | 0.6618 | 0.8267 | | 0.1531 | 3.6 | 2250 | 0.6595 | 0.8133 | | 0.1915 | 3.68 | 2300 | 0.6647 | 0.8267 | | 0.1601 | 3.76 | 2350 | 0.6729 | 0.8267 | | 0.176 | 3.84 | 2400 | 0.6699 | 0.82 | | 0.1815 | 3.92 | 2450 | 0.6819 | 0.8067 | | 0.1987 | 4.0 | 2500 | 0.6543 | 0.8333 | | 0.1236 | 4.08 | 2550 | 0.6686 | 0.8333 | | 0.1599 | 4.16 | 2600 | 0.6583 | 0.8267 
| | 0.1256 | 4.24 | 2650 | 0.6871 | 0.8267 | | 0.1291 | 4.32 | 2700 | 0.6855 | 0.82 | | 0.1198 | 4.4 | 2750 | 0.6901 | 0.82 | | 0.1245 | 4.48 | 2800 | 0.7152 | 0.8267 | | 0.1784 | 4.56 | 2850 | 0.7053 | 0.82 | | 0.1705 | 4.64 | 2900 | 0.7016 | 0.82 | | 0.1265 | 4.72 | 2950 | 0.7013 | 0.82 | | 0.1192 | 4.8 | 3000 | 0.7084 | 0.82 | | 0.174 | 4.88 | 3050 | 0.7062 | 0.82 | | 0.1328 | 4.96 | 3100 | 0.7061 | 0.82 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
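A minimal scoring sketch for this checkpoint; sentiment140 is a binary (negative/positive) task, but the saved head may expose generic `LABEL_0`/`LABEL_1` names, so check the config's label mapping:

```python
from transformers import pipeline

# The example tweet is arbitrary; label names depend on the saved config.
clf = pipeline("text-classification", model="pig4431/Sentiment140_BERT_5E")
print(clf("I love this new phone, best purchase all year!"))
print(clf.model.config.id2label)  # shows how LABEL_0/LABEL_1 map to classes
```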
nsadeq/InformBERT
nsadeq
2022-11-07T08:42:44Z
5
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:2210.11771", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-05T23:06:20Z
--- license: apache-2.0 --- # InformBERT ## Introduction InformBERT is pretrained with a variable masking strategy, in which informative tokens are masked more frequently than other tokens. InformBERT outperforms pretrained models based on random masking on the factual recall benchmark LAMA and the extractive question answering benchmark SQuAD. More details: https://arxiv.org/abs/2210.11771 ## How to load ```Python from transformers import BertTokenizer, AutoModel tokenizer = BertTokenizer.from_pretrained("nsadeq/InformBERT") model = AutoModel.from_pretrained("nsadeq/InformBERT") from transformers import pipeline unmasker = pipeline('fill-mask', model='nsadeq/InformBERT', tokenizer=tokenizer) unmasker("SpeedWeek is an American television program on [MASK].") ``` ## Citation ```bibtex @misc{https://doi.org/10.48550/arxiv.2210.11771, doi = {10.48550/ARXIV.2210.11771}, url = {https://arxiv.org/abs/2210.11771}, author = {Sadeq, Nafis and Xu, Canwen and McAuley, Julian}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {InforMask: Unsupervised Informative Masking for Language Model Pretraining}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
pig4431/Sentiment140_ALBERT_5E
pig4431
2022-11-07T07:45:04Z
105
0
transformers
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "dataset:sentiment140", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T07:44:38Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sentiment140 metrics: - accuracy model-index: - name: Sentiment140_ALBERT_5E results: - task: name: Text Classification type: text-classification dataset: name: sentiment140 type: sentiment140 config: sentiment140 split: train args: sentiment140 metrics: - name: Accuracy type: accuracy value: 0.8533333333333334 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sentiment140_ALBERT_5E This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the sentiment140 dataset. It achieves the following results on the evaluation set: - Loss: 0.6103 - Accuracy: 0.8533 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6713 | 0.08 | 50 | 0.5704 | 0.7333 | | 0.5742 | 0.16 | 100 | 0.4620 | 0.8 | | 0.5104 | 0.24 | 150 | 0.5536 | 0.74 | | 0.5313 | 0.32 | 200 | 0.5198 | 0.76 | | 0.5023 | 0.4 | 250 | 0.4286 | 0.8 | | 0.4871 | 0.48 | 300 | 0.4294 | 0.8267 | | 0.4513 | 0.56 | 350 | 0.4349 | 0.8133 | | 0.4647 | 0.64 | 400 | 0.4046 | 0.8333 | | 0.4827 | 0.72 | 450 | 0.4218 | 0.8333 | | 0.4517 | 0.8 | 500 | 0.4093 | 0.82 | | 0.4417 | 0.88 | 550 | 0.3999 | 0.84 | | 0.4701 | 0.96 | 600 | 0.3779 | 0.8867 | | 0.397 | 1.04 | 650 | 0.3730 | 0.8667 | | 0.3377 | 1.12 | 700 | 0.3833 | 0.8333 | | 0.411 | 1.2 | 750 | 0.3704 | 0.84 | | 0.3796 | 1.28 | 800 | 0.3472 | 0.86 | | 0.3523 | 1.36 | 850 | 0.3512 | 0.8733 | | 0.3992 | 1.44 | 900 | 0.3712 | 0.84 | | 0.3641 | 1.52 | 950 | 0.3718 | 0.82 | | 0.3973 | 1.6 | 1000 | 0.3508 | 0.84 | | 0.3576 | 1.68 | 1050 | 0.3600 | 0.86 | | 0.3701 | 1.76 | 1100 | 0.3287 | 0.8667 | | 0.3721 | 1.84 | 1150 | 0.3794 | 0.82 | | 0.3673 | 1.92 | 1200 | 0.3378 | 0.8733 | | 0.4223 | 2.0 | 1250 | 0.3508 | 0.86 | | 0.2745 | 2.08 | 1300 | 0.3835 | 0.86 | | 0.283 | 2.16 | 1350 | 0.3500 | 0.8533 | | 0.2769 | 2.24 | 1400 | 0.3334 | 0.8733 | | 0.2491 | 2.32 | 1450 | 0.3519 | 0.8867 | | 0.3237 | 2.4 | 1500 | 0.3438 | 0.86 | | 0.2662 | 2.48 | 1550 | 0.3513 | 0.8667 | | 0.2423 | 2.56 | 1600 | 0.3413 | 0.8867 | | 0.2655 | 2.64 | 1650 | 0.3126 | 0.8933 | | 0.2516 | 2.72 | 1700 | 0.3333 | 0.8733 | | 0.252 | 2.8 | 1750 | 0.3316 | 0.88 | | 0.2872 | 2.88 | 1800 | 0.3227 | 0.9 | | 0.306 | 2.96 | 1850 | 0.3383 | 0.8733 | | 0.248 | 3.04 | 1900 | 0.3474 | 0.8733 | | 0.1507 | 3.12 | 1950 | 0.4140 | 0.8667 | | 0.1994 | 3.2 | 2000 | 0.3729 | 0.8533 | | 0.167 | 3.28 | 2050 | 0.3782 | 0.8867 | | 0.1872 | 3.36 | 2100 | 0.4352 | 0.8867 | | 0.1611 | 3.44 | 2150 | 0.4511 | 0.8667 | | 0.2338 | 3.52 | 2200 | 0.4244 | 0.8533 | | 0.1538 | 3.6 | 2250 | 0.4226 | 0.8733 | | 0.1561 | 3.68 | 2300 | 0.4126 | 0.88 | | 0.2156 | 3.76 | 2350 | 0.4382 | 0.86 | | 0.1684 | 3.84 | 2400 | 0.4969 | 0.86 | | 0.1917 | 3.92 | 2450 | 0.4439 | 0.8667 | | 0.1584 | 4.0 | 2500 | 0.4759 | 0.86 | | 0.1038 | 4.08 | 2550 | 0.5042 | 0.8667 | | 0.0983 | 4.16 | 2600 | 0.5527 | 0.8533 | | 
0.1404 | 4.24 | 2650 | 0.5801 | 0.84 | | 0.0844 | 4.32 | 2700 | 0.5884 | 0.86 | | 0.1347 | 4.4 | 2750 | 0.5865 | 0.8467 | | 0.1373 | 4.48 | 2800 | 0.5915 | 0.8533 | | 0.1506 | 4.56 | 2850 | 0.5976 | 0.8467 | | 0.1007 | 4.64 | 2900 | 0.6678 | 0.82 | | 0.1311 | 4.72 | 2950 | 0.6082 | 0.8533 | | 0.1402 | 4.8 | 3000 | 0.6180 | 0.8467 | | 0.1363 | 4.88 | 3050 | 0.6107 | 0.8533 | | 0.0995 | 4.96 | 3100 | 0.6103 | 0.8533 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.3.2 - Tokenizers 0.13.1
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-sah-ntsema-colab
ntsema
2022-11-07T07:24:16Z
132
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-07T04:24:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - audiofolder metrics: - wer model-index: - name: wav2vec2-xlsr-53-espeak-cv-ft-sah-ntsema-colab results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Wer type: wer value: 0.2246858832224686 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xlsr-53-espeak-cv-ft-sah-ntsema-colab This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2143 - Wer: 0.2247 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.7431 | 5.71 | 400 | 0.2879 | 0.4054 | | 0.1876 | 11.42 | 800 | 0.2349 | 0.3023 | | 0.0986 | 17.14 | 1200 | 0.2248 | 0.2701 | | 0.0737 | 22.85 | 1600 | 0.2242 | 0.2428 | | 0.0546 | 28.57 | 2000 | 0.2143 | 0.2247 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.14.0.dev20221105+cu116 - Datasets 2.6.1 - Tokenizers 0.13.1
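A minimal transcription sketch; wav2vec2 checkpoints expect 16 kHz mono audio, the file path below is a placeholder, and decoding a file through the pipeline also requires ffmpeg:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ntsema/wav2vec2-xlsr-53-espeak-cv-ft-sah-ntsema-colab",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```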
pig4431/Sentiment140_DistilBERT_5E
pig4431
2022-11-07T07:15:51Z
37
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:sentiment140", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T07:10:58Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sentiment140 metrics: - accuracy model-index: - name: Sentiment140_DistilBERT_5E results: - task: name: Text Classification type: text-classification dataset: name: sentiment140 type: sentiment140 config: sentiment140 split: train args: sentiment140 metrics: - name: Accuracy type: accuracy value: 0.8333333333333334 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sentiment140_DistilBERT_5E This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sentiment140 dataset. It achieves the following results on the evaluation set: - Loss: 0.4897 - Accuracy: 0.8333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6784 | 0.08 | 50 | 0.6516 | 0.6933 | | 0.6301 | 0.16 | 100 | 0.5384 | 0.7533 | | 0.5438 | 0.24 | 150 | 0.4559 | 0.8 | | 0.4625 | 0.32 | 200 | 0.4287 | 0.8133 | | 0.4528 | 0.4 | 250 | 0.4056 | 0.8267 | | 0.4609 | 0.48 | 300 | 0.3883 | 0.8333 | | 0.4705 | 0.56 | 350 | 0.3886 | 0.8067 | | 0.4539 | 0.64 | 400 | 0.3967 | 0.82 | | 0.4483 | 0.72 | 450 | 0.3758 | 0.82 | | 0.4699 | 0.8 | 500 | 0.4003 | 0.8133 | | 0.467 | 0.88 | 550 | 0.4021 | 0.8267 | | 0.454 | 0.96 | 600 | 0.3735 | 0.8333 | | 0.4227 | 1.04 | 650 | 0.3840 | 0.8267 | | 0.3584 | 1.12 | 700 | 0.3775 | 0.8333 | | 0.3618 | 1.2 | 750 | 0.4026 | 0.8267 | | 0.3634 | 1.28 | 800 | 0.3891 | 0.8133 | | 0.3751 | 1.36 | 850 | 0.3895 | 0.8267 | | 0.3484 | 1.44 | 900 | 0.3919 | 0.8267 | | 0.3764 | 1.52 | 950 | 0.3770 | 0.84 | | 0.3488 | 1.6 | 1000 | 0.4028 | 0.82 | | 0.3665 | 1.68 | 1050 | 0.3779 | 0.8333 | | 0.3925 | 1.76 | 1100 | 0.3726 | 0.84 | | 0.3624 | 1.84 | 1150 | 0.3655 | 0.84 | | 0.3876 | 1.92 | 1200 | 0.3648 | 0.8133 | | 0.3935 | 2.0 | 1250 | 0.3633 | 0.8467 | | 0.2944 | 2.08 | 1300 | 0.3808 | 0.8333 | | 0.2957 | 2.16 | 1350 | 0.3836 | 0.8333 | | 0.266 | 2.24 | 1400 | 0.3940 | 0.8267 | | 0.2747 | 2.32 | 1450 | 0.3952 | 0.84 | | 0.314 | 2.4 | 1500 | 0.4060 | 0.8133 | | 0.3419 | 2.48 | 1550 | 0.4025 | 0.8133 | | 0.2782 | 2.56 | 1600 | 0.4218 | 0.82 | | 0.3218 | 2.64 | 1650 | 0.4039 | 0.8333 | | 0.2863 | 2.72 | 1700 | 0.4130 | 0.8267 | | 0.3336 | 2.8 | 1750 | 0.4026 | 0.8133 | | 0.3224 | 2.88 | 1800 | 0.3910 | 0.8267 | | 0.2709 | 2.96 | 1850 | 0.3979 | 0.84 | | 0.2701 | 3.04 | 1900 | 0.4127 | 0.8333 | | 0.2782 | 3.12 | 1950 | 0.4335 | 0.82 | | 0.2425 | 3.2 | 2000 | 0.4229 | 0.8333 | | 0.2457 | 3.28 | 2050 | 0.4168 | 0.8333 | | 0.217 | 3.36 | 2100 | 0.4264 | 0.8267 | | 0.2522 | 3.44 | 2150 | 0.4250 | 0.8333 | | 0.2402 | 3.52 | 2200 | 0.4371 | 0.8333 | | 0.2465 | 3.6 | 2250 | 0.4429 | 0.8333 | | 0.2427 | 3.68 | 2300 | 0.4435 | 0.8333 | | 0.2408 | 3.76 | 2350 | 0.4500 | 0.84 | | 0.1976 | 3.84 | 2400 | 0.4536 | 0.8333 | | 0.23 | 3.92 | 2450 | 0.4645 | 0.8333 | | 0.2449 | 4.0 | 2500 | 0.4557 | 0.8467 | | 0.1933 | 4.08 | 2550 | 0.4672 | 0.84 | 
| 0.213 | 4.16 | 2600 | 0.4717 | 0.84 | | 0.1772 | 4.24 | 2650 | 0.4843 | 0.8267 | | 0.1917 | 4.32 | 2700 | 0.4690 | 0.8467 | | 0.2094 | 4.4 | 2750 | 0.4728 | 0.8467 | | 0.1903 | 4.48 | 2800 | 0.4755 | 0.8467 | | 0.2541 | 4.56 | 2850 | 0.4791 | 0.84 | | 0.1805 | 4.64 | 2900 | 0.4877 | 0.84 | | 0.2183 | 4.72 | 2950 | 0.4940 | 0.8267 | | 0.2257 | 4.8 | 3000 | 0.4905 | 0.8333 | | 0.2496 | 4.88 | 3050 | 0.4883 | 0.84 | | 0.1846 | 4.96 | 3100 | 0.4897 | 0.8333 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
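The checkpoint can also be scored without the pipeline API; a sketch that batches two arbitrary inputs and softmaxes the logits directly:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "pig4431/Sentiment140_DistilBERT_5E"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

batch = tokenizer(
    ["worst customer service ever", "what a great day"],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs)  # one row per input; column order follows model.config.id2label
```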
pig4431/Sentiment140_ELECTRA_5E
pig4431
2022-11-07T07:08:03Z
7
1
transformers
[ "transformers", "pytorch", "electra", "text-classification", "generated_from_trainer", "dataset:sentiment140", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T07:06:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sentiment140 metrics: - accuracy model-index: - name: Sentiment140_ELECTRA_5E results: - task: name: Text Classification type: text-classification dataset: name: sentiment140 type: sentiment140 config: sentiment140 split: train args: sentiment140 metrics: - name: Accuracy type: accuracy value: 0.84 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sentiment140_ELECTRA_5E This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the sentiment140 dataset. It achieves the following results on the evaluation set: - Loss: 0.5410 - Accuracy: 0.84 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6896 | 0.08 | 50 | 0.6605 | 0.7133 | | 0.6664 | 0.16 | 100 | 0.6054 | 0.7133 | | 0.5915 | 0.24 | 150 | 0.4777 | 0.8333 | | 0.5053 | 0.32 | 200 | 0.4735 | 0.7733 | | 0.4946 | 0.4 | 250 | 0.3847 | 0.8267 | | 0.4578 | 0.48 | 300 | 0.4025 | 0.8067 | | 0.4724 | 0.56 | 350 | 0.3642 | 0.8333 | | 0.4309 | 0.64 | 400 | 0.3762 | 0.86 | | 0.4818 | 0.72 | 450 | 0.3829 | 0.84 | | 0.416 | 0.8 | 500 | 0.3599 | 0.8467 | | 0.4201 | 0.88 | 550 | 0.3469 | 0.8533 | | 0.3664 | 0.96 | 600 | 0.3462 | 0.8467 | | 0.4289 | 1.04 | 650 | 0.3470 | 0.86 | | 0.3859 | 1.12 | 700 | 0.3440 | 0.8533 | | 0.3599 | 1.2 | 750 | 0.3475 | 0.8533 | | 0.3287 | 1.28 | 800 | 0.3524 | 0.8467 | | 0.3331 | 1.36 | 850 | 0.3475 | 0.8733 | | 0.3236 | 1.44 | 900 | 0.3657 | 0.8467 | | 0.3502 | 1.52 | 950 | 0.3525 | 0.84 | | 0.3702 | 1.6 | 1000 | 0.3655 | 0.8333 | | 0.3323 | 1.68 | 1050 | 0.3405 | 0.84 | | 0.3452 | 1.76 | 1100 | 0.3376 | 0.8533 | | 0.3742 | 1.84 | 1150 | 0.3481 | 0.8533 | | 0.3145 | 1.92 | 1200 | 0.3472 | 0.86 | | 0.3657 | 2.0 | 1250 | 0.3302 | 0.8733 | | 0.2601 | 2.08 | 1300 | 0.3612 | 0.86 | | 0.2954 | 2.16 | 1350 | 0.3640 | 0.8533 | | 0.2888 | 2.24 | 1400 | 0.3670 | 0.8467 | | 0.2572 | 2.32 | 1450 | 0.4118 | 0.84 | | 0.2955 | 2.4 | 1500 | 0.3811 | 0.86 | | 0.2431 | 2.48 | 1550 | 0.4221 | 0.84 | | 0.318 | 2.56 | 1600 | 0.3844 | 0.8467 | | 0.2615 | 2.64 | 1650 | 0.4109 | 0.8333 | | 0.2389 | 2.72 | 1700 | 0.4420 | 0.8467 | | 0.2983 | 2.8 | 1750 | 0.4203 | 0.8467 | | 0.2828 | 2.88 | 1800 | 0.3629 | 0.8733 | | 0.2897 | 2.96 | 1850 | 0.3916 | 0.8733 | | 0.2239 | 3.04 | 1900 | 0.4143 | 0.86 | | 0.2093 | 3.12 | 1950 | 0.4521 | 0.84 | | 0.2438 | 3.2 | 2000 | 0.4271 | 0.8467 | | 0.2282 | 3.28 | 2050 | 0.4548 | 0.8333 | | 0.1918 | 3.36 | 2100 | 0.4533 | 0.86 | | 0.1698 | 3.44 | 2150 | 0.5177 | 0.84 | | 0.2765 | 3.52 | 2200 | 0.4884 | 0.84 | | 0.2282 | 3.6 | 2250 | 0.4697 | 0.8533 | | 0.239 | 3.68 | 2300 | 0.4766 | 0.8533 | | 0.2219 | 3.76 | 2350 | 0.4628 | 0.8533 | | 0.2375 | 3.84 | 2400 | 0.4704 | 0.8533 | | 0.1883 | 3.92 | 2450 | 0.4744 | 0.84 | | 0.2049 | 4.0 | 2500 | 0.4977 | 0.84 | | 0.1958 | 4.08 | 2550 | 0.4906 | 0.84 | | 
0.1656 | 4.16 | 2600 | 0.5219 | 0.8333 | | 0.1543 | 4.24 | 2650 | 0.5379 | 0.8333 | | 0.2082 | 4.32 | 2700 | 0.5107 | 0.84 | | 0.1724 | 4.4 | 2750 | 0.5208 | 0.84 | | 0.1778 | 4.48 | 2800 | 0.5238 | 0.84 | | 0.1914 | 4.56 | 2850 | 0.5325 | 0.84 | | 0.2436 | 4.64 | 2900 | 0.5279 | 0.84 | | 0.1662 | 4.72 | 2950 | 0.5295 | 0.84 | | 0.1288 | 4.8 | 3000 | 0.5392 | 0.84 | | 0.2087 | 4.88 | 3050 | 0.5409 | 0.84 | | 0.1612 | 4.96 | 3100 | 0.5410 | 0.84 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.3.2 - Tokenizers 0.13.1
fimu-docproc-research/master_0.0.1_DoctrOcrEngine
fimu-docproc-research
2022-11-07T06:00:27Z
5
0
transformers
[ "transformers", "pytorch", "cz", "endpoints_compatible", "region:us" ]
null
2022-11-06T20:56:46Z
--- language: cz --- **Optical Character Recognition made seamless & accessible to anyone, powered by PyTorch** ## Task: recognition ### Example usage: ```python >>> from doctr.io import DocumentFile >>> from doctr.models import ocr_predictor, from_hub >>> img = DocumentFile.from_images(['<image_path>']) >>> # Load your model from the hub >>> model = from_hub('mindee/my-model') >>> # Pass it to the predictor >>> # If your model is a recognition model: >>> predictor = ocr_predictor(det_arch='db_resnet50', >>> reco_arch=model, >>> pretrained=True) >>> # Get your predictions >>> res = predictor(img) ``` Training configuration and logs: https://wandb.ai/xbankov/text-recognition ### Run Configuration { "hf_dataset_name": "fimu-docproc-research/born_digital", "name": "master_20221106-223158", "epochs": 50, "lr": 0.001, "weight_decay": 0, "batch_size": 512, "input_size": 32, "sched": "cosine", "sample": null, "workers": 16, "wb": true, "push_to_hub": "fimu-docproc-research/master_0.0.1", "test_only": false, "arch": "master" }
KarmicPumpkin/Rouge-the-bat-dreambooth
KarmicPumpkin
2022-11-07T05:24:53Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-07T04:06:25Z
--- license: creativeml-openrail-m --- SD v1.5 model trained using Fast-sd on 44 images of Rouge the Bat for 1600/2400 steps. Use the keyword 'rkugasebz' to generate Rouge the Bat in outputs. Output samples here: https://www.kpgametour.com/2022/11/training-ai-and-tips-using-fast-stable.html
tkubotake/xlm-roberta-base-finetuned-panx-en
tkubotake
2022-11-07T05:12:03Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-07T03:30:35Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.en split: train args: PAN-X.en metrics: - name: F1 type: f1 value: 0.7580275229357799 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [tkubotake/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/tkubotake/xlm-roberta-base-finetuned-panx-de) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.5430 - F1: 0.7580 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1318 | 1.0 | 50 | 0.4145 | 0.7557 | | 0.0589 | 2.0 | 100 | 0.5016 | 0.7524 | | 0.0314 | 3.0 | 150 | 0.5430 | 0.7580 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
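A minimal grouped-entity sketch for this PAN-X (English) fine-tune; the example sentence is arbitrary:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tkubotake/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Jeff Dean works at Google in Mountain View."))
```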
tkubotake/xlm-roberta-base-finetuned-panx-it
tkubotake
2022-11-07T04:56:08Z
8
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-07T03:14:56Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.it split: train args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8602239734549979 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [tkubotake/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/tkubotake/xlm-roberta-base-finetuned-panx-de) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2762 - F1: 0.8602 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1073 | 1.0 | 70 | 0.2783 | 0.8554 | | 0.0728 | 2.0 | 140 | 0.2651 | 0.8605 | | 0.0409 | 3.0 | 210 | 0.2762 | 0.8602 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
jinhybr/OCR-LayoutLMv3
jinhybr
2022-11-07T04:49:32Z
27
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "generated_from_trainer", "dataset:funsd-layoutlmv3", "arxiv:2204.08387", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-05T01:36:12Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - funsd-layoutlmv3 metrics: - precision - recall - f1 - accuracy model-index: - name: OCR-LayoutLMv3 results: - task: name: Token Classification type: token-classification dataset: name: funsd-layoutlmv3 type: funsd-layoutlmv3 config: funsd split: train args: funsd metrics: - name: Precision type: precision value: 0.8988653182042428 - name: Recall type: recall value: 0.905116741182315 - name: F1 type: f1 value: 0.9019801980198019 - name: Accuracy type: accuracy value: 0.8403661000832046 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # OCR-LayoutLMv3 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the funsd-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 0.9788 - Precision: 0.8989 - Recall: 0.9051 - F1: 0.9020 - Accuracy: 0.8404 ## Model description LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis. [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, Preprint 2022. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.33 | 100 | 0.6966 | 0.7418 | 0.8063 | 0.7727 | 0.7801 | | No log | 2.67 | 200 | 0.5767 | 0.8104 | 0.8644 | 0.8365 | 0.8117 | | No log | 4.0 | 300 | 0.5355 | 0.8246 | 0.8852 | 0.8539 | 0.8295 | | No log | 5.33 | 400 | 0.5240 | 0.8706 | 0.8922 | 0.8813 | 0.8427 | | 0.5326 | 6.67 | 500 | 0.6337 | 0.8528 | 0.8778 | 0.8651 | 0.8260 | | 0.5326 | 8.0 | 600 | 0.6870 | 0.8698 | 0.8828 | 0.8762 | 0.8240 | | 0.5326 | 9.33 | 700 | 0.6584 | 0.8723 | 0.9061 | 0.8889 | 0.8342 | | 0.5326 | 10.67 | 800 | 0.7186 | 0.8868 | 0.9031 | 0.8949 | 0.8335 | | 0.5326 | 12.0 | 900 | 0.6822 | 0.9040 | 0.9076 | 0.9058 | 0.8526 | | 0.1248 | 13.33 | 1000 | 0.7042 | 0.8872 | 0.9021 | 0.8946 | 0.8511 | | 0.1248 | 14.67 | 1100 | 0.7920 | 0.9027 | 0.9036 | 0.9032 | 0.8480 | | 0.1248 | 16.0 | 1200 | 0.8052 | 0.8964 | 0.9151 | 0.9056 | 0.8389 | | 0.1248 | 17.33 | 1300 | 0.8932 | 0.8995 | 0.9066 | 0.9030 | 0.8329 | | 0.1248 | 18.67 | 1400 | 0.8728 | 0.8950 | 0.9061 | 0.9005 | 0.8398 | | 0.0442 | 20.0 | 1500 | 0.9051 | 0.8960 | 0.9116 | 0.9037 | 0.8347 | | 0.0442 | 21.33 | 1600 | 0.9587 | 0.8947 | 0.9031 | 0.8989 | 0.8401 | | 0.0442 | 22.67 | 1700 | 0.9822 | 0.9042 | 0.9046 | 0.9044 | 0.8389 | | 0.0442 | 24.0 | 1800 | 0.9734 | 0.9043 | 0.9061 | 0.9052 | 0.8391 | | 0.0442 | 25.33 | 1900 | 0.9842 | 0.9042 | 0.9091 | 0.9066 | 0.8410 | | 0.0225 | 26.67 | 
2000 | 0.9788 | 0.8989 | 0.9051 | 0.9020 | 0.8404 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
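A minimal inference sketch for this checkpoint; LayoutLMv3's processor runs Tesseract OCR on the page image by default (so `pytesseract` must be installed), and the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import AutoModelForTokenClassification, AutoProcessor

name = "jinhybr/OCR-LayoutLMv3"
processor = AutoProcessor.from_pretrained(name, apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained(name)

image = Image.open("form.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
# Per-token label ids, including special tokens at the sequence boundaries.
labels = [model.config.id2label[i] for i in logits.argmax(-1).squeeze().tolist()]
print(labels)
```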
maustin10/my-awesome-setfit-model4
maustin10
2022-11-07T03:33:56Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-07T03:33:43Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 40, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
salascorp/categorizacion_comercios_v_0.0.7
salascorp
2022-11-07T03:24:01Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-07T02:51:40Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer metrics: - accuracy model-index: - name: categorizacion_comercios_v_0.0.7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # categorizacion_comercios_v_0.0.7 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the datasetX dataset. It achieves the following results on the evaluation set: - Loss: 0.4673 - Accuracy: 0.9125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.13.0+cpu - Datasets 2.6.1 - Tokenizers 0.13.1
Formzu/bart-base-japanese
Formzu
2022-11-07T02:13:39Z
7
2
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "bart", "ja", "dataset:wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-31T06:52:38Z
--- language: - ja license: mit tags: - bart - pytorch datasets: - wikipedia --- # bart-base-japanese This model is converted from the original [Japanese BART Pretrained model](https://nlp.ist.i.kyoto-u.ac.jp/?BART%E6%97%A5%E6%9C%AC%E8%AA%9EPretrained%E3%83%A2%E3%83%87%E3%83%AB) released by Kyoto University. Both the encoder and decoder outputs are identical to the original Fairseq model. ### How to use the model The input text should be tokenized by [BartJapaneseTokenizer](https://huggingface.co/Formzu/bart-base-japanese/blob/main/tokenization_bart_japanese.py). Tokenizer requirements: * [Juman++](https://github.com/ku-nlp/jumanpp) * [zenhan](https://pypi.org/project/zenhan/) * [pyknp](https://pypi.org/project/pyknp/) * [sentencepiece](https://pypi.org/project/sentencepiece/) #### Simple FillMaskPipeline ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline model_name = "Formzu/bart-base-japanese" model = AutoModelForSeq2SeqLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) masked_text = "天気が<mask>から散歩しましょう。" fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer) out = fill_mask(masked_text) print(out) # [{'score': 0.19255658984184265, 'token': 1718, 'token_str': 'よく', 'sequence': '天気 が よく から 散歩 し ましょう 。'}, # {'score': 0.14426815509796143, 'token': 5478, 'token_str': '良く', 'sequence': '天気 が 良く から 散歩 し ましょう 。'}, # {'score': 0.05554169788956642, 'token': 6561, 'token_str': '悪い', 'sequence': '天気 が 悪い から 散歩 し ましょう 。'}, # {'score': 0.05524599179625511, 'token': 3553, 'token_str': '良い', 'sequence': '天気 が 良い から 散歩 し ましょう 。'}, # {'score': 0.03720080852508545, 'token': 1370, 'token_str': '良', 'sequence': '天気 が 良 から 散歩 し ましょう 。'}] ``` #### Text Generation ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_name = "Formzu/bart-base-japanese" model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) masked_text = "天気が<mask>から散歩しましょう。" inp = tokenizer(masked_text, return_tensors='pt').to(device) out = model.generate(**inp, num_beams=1, min_length=0, max_length=20, early_stopping=True, no_repeat_ngram_size=2) res = "".join(tokenizer.decode(out.squeeze(0).tolist(), skip_special_tokens=True).split(" ")) print(res) # 天気がよくなってから散歩しましょう。天気のよく合っているところにいる ``` ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu116 - Tokenizers 0.12.1
sd-concepts-library/ettblackteapot
sd-concepts-library
2022-11-07T00:41:53Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-11-07T00:41:41Z
--- license: mit --- ### EttBlackTeapot on Stable Diffusion This is the `<my-teapot>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<my-teapot> 0](https://huggingface.co/sd-concepts-library/ettblackteapot/resolve/main/concept_images/6.jpeg) ![<my-teapot> 1](https://huggingface.co/sd-concepts-library/ettblackteapot/resolve/main/concept_images/2.jpeg) ![<my-teapot> 2](https://huggingface.co/sd-concepts-library/ettblackteapot/resolve/main/concept_images/0.jpeg) ![<my-teapot> 3](https://huggingface.co/sd-concepts-library/ettblackteapot/resolve/main/concept_images/8.jpeg) ![<my-teapot> 4](https://huggingface.co/sd-concepts-library/ettblackteapot/resolve/main/concept_images/3.jpeg) ![<my-teapot> 5](https://huggingface.co/sd-concepts-library/ettblackteapot/resolve/main/concept_images/5.jpeg) ![<my-teapot> 6](https://huggingface.co/sd-concepts-library/ettblackteapot/resolve/main/concept_images/4.jpeg) ![<my-teapot> 7](https://huggingface.co/sd-concepts-library/ettblackteapot/resolve/main/concept_images/9.jpeg) ![<my-teapot> 8](https://huggingface.co/sd-concepts-library/ettblackteapot/resolve/main/concept_images/1.jpeg) ![<my-teapot> 9](https://huggingface.co/sd-concepts-library/ettblackteapot/resolve/main/concept_images/7.jpeg)
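Outside those notebooks, the learned embedding can be loaded into a `diffusers` pipeline by hand; a sketch assuming the repo ships the standard `learned_embeds.bin` produced by textual inversion (any Stable Diffusion 1.x base checkpoint should work):

```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Download the learned embedding and read out its placeholder token (<my-teapot>).
path = hf_hub_download("sd-concepts-library/ettblackteapot", "learned_embeds.bin")
token, embedding = next(iter(torch.load(path, map_location="cpu").items()))

# Register the token and copy its embedding into the text encoder.
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe("a photo of <my-teapot> on a kitchen table").images[0]
image.save("teapot.png")
```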
huggingtweets/thebuddha_3
huggingtweets
2022-11-07T00:16:25Z
107
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-07T00:16:16Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1421008625095647234/Vfg52xtV_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Buddha</div> <div style="text-align: center; font-size: 14px;">@thebuddha_3</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Buddha. | Data | Buddha | | --- | --- | | Tweets downloaded | 3200 | | Retweets | 138 | | Short tweets | 695 | | Tweets kept | 2367 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/14lqj1g8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thebuddha_3's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3rpocant) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3rpocant/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/thebuddha_3') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BigSalmon/InformalToFormalLincoln89Paraphrase
BigSalmon
2022-11-06T22:11:56Z
163
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-01T03:15:59Z
data: https://github.com/BigSalmon2/InformalToFormalDataset Text Generation Informal Formal ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln88Paraphrase") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln88Paraphrase") ``` ``` Demo: https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy ``` ``` prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:""" input_ids = tokenizer.encode(prompt, return_tensors='pt') outputs = model.generate(input_ids=input_ids, max_length=10 + len(prompt), temperature=1.0, top_k=50, top_p=0.95, do_sample=True, num_return_sequences=5, early_stopping=True) for i in range(5): print(tokenizer.decode(outputs[i])) ``` Most likely outputs (Disclaimer: I highly recommend using this over just generating): ``` prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:""" text = tokenizer.encode(prompt) myinput, past_key_values = torch.tensor([text]), None myinput = myinput myinput= myinput.to(device) logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False) logits = logits[0,-1] probabilities = torch.nn.functional.softmax(logits) best_logits, best_indices = logits.topk(250) best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] text.append(best_indices[0].item()) best_probabilities = probabilities[best_indices].tolist() words = [] print(best_words) ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` original: microsoft word's [MASK] pricing invites competition. Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition. *** original: the library’s quiet atmosphere encourages visitors to [blank] in their work. Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work. ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). 
text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - nebraska - unicamerical legislature - different from federal house and senate text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate. 
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy

text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```

```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```

```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classical music is fading.
***
wordy:
```

```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```

```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```

Keywords to sentences or a single sentence:

```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```

```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```

```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```

```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```

```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```

```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {D} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```

Infill / Infilling / Masking / Phrase Masking (works pretty decently, especially when you use the logprobs code from above):

```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer]
***
microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer]
***
```

Backwards

```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```

```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world. *** topic: ``` ``` regular: illinois went against the census' population-loss prediction by getting more residents. VBG: defying the census' prediction of population loss, illinois experienced growth. *** regular: microsoft word’s high pricing increases the likelihood of competition. VBG: extortionately priced, microsoft word is inviting competition. *** regular: ``` ``` source: badminton should be more popular in the US. QUERY: Based on the given topic, can you develop a story outline? target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing. *** source: movies in theaters should be free. QUERY: Based on the given topic, can you develop a story outline? target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay. *** source: ``` ``` in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure. *** the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule. *** the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement. *** ``` ``` it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise. question: what does “do likewise” mean in the above context? (a) make the same journey (b) share in the promise of the american dream (c) start anew in the land of opportunity (d) make landfall on the united states *** in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure. question: what does “this orientation” mean in the above context? (a) visible business practices (b) candor with the public (c) open, honest communication (d) culture of accountability ``` ``` example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot. text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities. *** example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear. 
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```

```
<Prefix> the atlanta hawks may attribute <Prefix>
<Suffix> trae young <Suffix>
<Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Prefix>
<Suffix> honor <Suffix>
<Middle> is a singularly prestigious <Middle>
```

*Note* Of all the masking techniques, this one works the best.

```
accustomed to having its name uttered ______, harvard university is weathering a rare spell of reputational tumult
(a) in reverential tones
(b) with great affection
(c) in adulatory fashion
(d) in glowing terms
```

```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```

```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```

```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```

```
essence: when someone's views are keeping within reasonable.
refine: the senator's voting record is ( moderate / centrist / pragmatic / balanced / fair-minded / even-handed ).
***
essence: when things are worked through in a petty way.
refine: the propensity of the u.s. congress to settle every dispute by way of ( mudslinging / bickering / demagoguery / name-calling / finger-pointing / vilification ) is appalling.
```

```
music before bedtime [makes for being able to relax] -> is a recipe for relaxation.
```

```
[people wanting entertainment love traveling new york city] -> travelers flock to new york city in droves, drawn to its iconic entertainment scene.
[cannot blame them] -> one cannot fault them
[broadway so fun] -> when it is home to such thrilling fare as Broadway.
```

```
in their ( ‖ when you are rushing because you want to get there on time ‖ / haste to arrive punctually / mad dash to be timely ), morning commuters are too rushed to whip up their own meal.
***
politicians prefer to author vague plans rather than ( ‖ when you can make a plan without many unknowns ‖ / actionable policies / concrete solutions ).
```

```
Q: What is whistleblower protection?
A: Whistleblower protection is a form of legal immunity granted to employees who expose the unethical practices of their employer.
Q: Why are whistleblower protections important?
A: Absent whistleblower protections, employees would be deterred from exposing their employer’s wrongdoing for fear of retribution. Q: Why would an employer engage in retribution? A: An employer who has acted unethically stands to suffer severe financial and reputational damage were their transgressions to become public. To safeguard themselves from these consequences, they might seek to dissuade employees from exposing their wrongdoing. ``` ``` original: the meritocratic nature of crowdfunding [MASK] into their vision's viability. infill: the meritocratic nature of crowdfunding [gives investors idea of how successful] -> ( offers entrepreneurs a window ) into their vision's viability. ``` ``` Leadership | Lecture 17: Worker Morale What Workers Look for in Companies: • Benefits o Tuition reimbursement o Paid parental leave o 401K matching o Profit sharing o Pension plans o Free meals • Social responsibility o Environmental stewardship o Charitable contributions o Diversity • Work-life balance o Telecommuting o Paid holidays and vacation o Casual dress • Growth opportunities • Job security • Competitive compensation • Recognition o Open-door policies o Whistleblower protection o Employee-of-the-month awards o Positive performance reviews o Bonuses ```
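
The single-step "most likely outputs" snippet near the top of this card can be looped into a simple greedy decoder. The sketch below is an illustrative addition, not part of the original examples: the 20-token budget, the top-10 candidate inspection, and the greedy argmax choice are all arbitrary assumptions, and it reuses past_key_values so that every step after the first only has to process the newest token.

```
# Minimal sketch (illustrative, not part of the original card): loop the
# single-step logits code into a greedy decoder that prints the top
# candidates at every step.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln88Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln88Paraphrase")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

prompt = "informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"
text = tokenizer.encode(prompt)
myinput = torch.tensor([text]).to(device)
past_key_values = None

with torch.no_grad():
    for _ in range(20):  # extend by up to 20 tokens (arbitrary budget)
        logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
        logits = logits[0, -1]
        best_logits, best_indices = logits.topk(10)
        # show the candidates, then greedily take the argmax
        print([tokenizer.decode([idx.item()]) for idx in best_indices])
        next_token = best_indices[0].item()
        text.append(next_token)
        # with past_key_values cached, only the newest token is fed back in
        myinput = torch.tensor([[next_token]]).to(device)

print(tokenizer.decode(text))
```

Greedy argmax is shown only for simplicity; picking from best_indices by hand (or sampling from the distribution) tends to give more varied paraphrases, which is the point of the top-candidates approach recommended above.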