Dataset columns:

| Column | Type | Range / distinct values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-03 18:30:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 537 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-03 18:30:19 |
| card | string | lengths 11 to 1.01M |
Jeppo/Llama-2-13B-chat
Jeppo
2023-09-02T16:01:22Z
8
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "en", "arxiv:2307.09288", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-25T08:25:25Z
--- extra_gated_heading: Access Llama 2 on Hugging Face extra_gated_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**" extra_gated_button_content: Submit extra_gated_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: - en pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 --- # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. 
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212) (a minimal usage sketch also follows this card). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
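The card above describes the `[INST]`/`<<SYS>>` chat format but stops short of a runnable example. Below is a minimal, hedged sketch of loading this checkpoint with `transformers` and wrapping a single-turn prompt in that format; the system prompt, question, and generation settings are illustrative, and access to the gated weights is assumed.

```python
# Minimal usage sketch (not part of the original card); assumes access to the weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jeppo/Llama-2-13B-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

system = "You are a helpful, respectful and honest assistant."  # illustrative system prompt
user = "What is the capital of France?"                          # illustrative user message

# Llama-2-Chat single-turn format; the tokenizer adds the BOS token itself.
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user.strip()} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```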
The-matt/autumn-shadow-48_130
The-matt
2023-09-02T15:55:52Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:55:47Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
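The card lists the `bitsandbytes` settings used during training but not how to reproduce them, nor which base model the adapter targets. Below is a hedged sketch of rebuilding that config with `transformers` and attaching the adapter via `peft`; the base model id is a placeholder, since the card does not name it.

```python
# Hedged sketch: recreate the quantization config listed in the card and attach the adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "base-model-id"  # placeholder: the card does not state the base model

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "The-matt/autumn-shadow-48_130")
```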
btamm12/roberta-base-finetuned-wls-manual-10ep
btamm12
2023-09-02T15:52:47Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:50:16Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-wls-manual-10ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-wls-manual-10ep This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0599 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8201 | 0.93 | 7 | 1.5286 | | 1.4462 | 2.0 | 15 | 1.3480 | | 1.3032 | 2.93 | 22 | 1.3377 | | 1.2564 | 4.0 | 30 | 1.1907 | | 1.246 | 4.93 | 37 | 1.1702 | | 1.1777 | 6.0 | 45 | 1.1549 | | 1.118 | 6.93 | 52 | 1.0611 | | 1.1339 | 8.0 | 60 | 1.1084 | | 1.1158 | 8.93 | 67 | 1.1376 | | 1.0143 | 9.33 | 70 | 1.1225 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
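Since the card does not show how to query the model, here is a small sketch using the `transformers` fill-mask pipeline; the example sentence is illustrative and the checkpoint is assumed to be publicly loadable under this id.

```python
# Hedged usage sketch for a RoBERTa-based fill-mask checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="btamm12/roberta-base-finetuned-wls-manual-10ep")

# RoBERTa tokenizers use "<mask>" as the mask token.
for pred in fill("The weather today is <mask>.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```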
btamm12/bert-base-uncased-finetuned-wls-manual-10ep-lower
btamm12
2023-09-02T15:50:08Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:47:54Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-wls-manual-10ep-lower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wls-manual-10ep-lower This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4076 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1089 | 0.93 | 7 | 1.9417 | | 1.5952 | 2.0 | 15 | 1.5688 | | 1.4717 | 2.93 | 22 | 1.4364 | | 1.3673 | 4.0 | 30 | 1.4096 | | 1.2666 | 4.93 | 37 | 1.2430 | | 1.2398 | 6.0 | 45 | 1.2435 | | 1.2056 | 6.93 | 52 | 1.2533 | | 1.1372 | 8.0 | 60 | 1.3034 | | 1.1384 | 8.93 | 67 | 1.2087 | | 1.1148 | 9.33 | 70 | 1.2141 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
norman365/atom-Llama2-chinese-7b-ggml.bin
norman365
2023-09-02T15:47:03Z
0
0
null
[ "zh", "license:apache-2.0", "region:us" ]
null
2023-09-02T15:46:12Z
--- license: apache-2.0 language: - zh ---
kaneki1933/testes
kaneki1933
2023-09-02T15:44:09Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-20T17:55:55Z
--- license: creativeml-openrail-m ---
btamm12/bert-base-uncased-finetuned-wls-manual-9ep-lower
btamm12
2023-09-02T15:42:56Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:40:41Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-wls-manual-9ep-lower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wls-manual-9ep-lower This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 9 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1096 | 0.93 | 7 | 1.9445 | | 1.5963 | 2.0 | 15 | 1.5711 | | 1.4734 | 2.93 | 22 | 1.4391 | | 1.3716 | 4.0 | 30 | 1.4138 | | 1.2719 | 4.93 | 37 | 1.2480 | | 1.2486 | 6.0 | 45 | 1.2483 | | 1.2156 | 6.93 | 52 | 1.2662 | | 1.1523 | 8.0 | 60 | 1.3172 | | 1.1596 | 8.4 | 63 | 1.2467 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
rajaswa-postman/es_chat_lora
rajaswa-postman
2023-09-02T15:39:41Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:22:10Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
btamm12/roberta-base-finetuned-wls-manual-8ep
btamm12
2023-09-02T15:38:16Z
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:35:48Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-wls-manual-8ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-wls-manual-8ep This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1496 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8186 | 0.93 | 7 | 1.5245 | | 1.4337 | 2.0 | 15 | 1.3340 | | 1.2959 | 2.93 | 22 | 1.3375 | | 1.2682 | 4.0 | 30 | 1.1892 | | 1.2558 | 4.93 | 37 | 1.1743 | | 1.1828 | 6.0 | 45 | 1.1438 | | 1.138 | 6.93 | 52 | 1.0716 | | 1.1495 | 7.47 | 56 | 1.1702 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
haddadalwi/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad2-noAns
haddadalwi
2023-09-02T15:36:53Z
117
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "base_model:google-bert/bert-large-uncased-whole-word-masking-finetuned-squad", "base_model:finetune:google-bert/bert-large-uncased-whole-word-masking-finetuned-squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-09-01T16:30:38Z
--- license: apache-2.0 base_model: bert-large-uncased-whole-word-masking-finetuned-squad tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad2-noAns results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad2-noAns This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 266 | 0.0000 | | 0.0649 | 2.0 | 532 | 0.0000 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
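The card reports only the training setup; a small, hedged sketch of extractive question answering with the `transformers` pipeline follows. The question/context pair is illustrative, and `handle_impossible_answer` is enabled because the model was tuned on squad_v2, which contains unanswerable questions.

```python
# Hedged usage sketch for the squad_v2-tuned question-answering checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="haddadalwi/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad2-noAns",
)
result = qa(
    question="Where was the treaty signed?",
    context="The treaty was signed in Paris in 1856 after lengthy negotiations.",
    handle_impossible_answer=True,  # squad_v2 includes unanswerable questions
)
print(result)
```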
KingKazma/xsum_t5-small_lora_500_10_50000_8_e2_s6789_v4_l4_r4
KingKazma
2023-09-02T15:36:43Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:36:42Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
TLME/western-classification
TLME
2023-09-02T15:28:54Z
0
0
null
[ "image-classification", "license:mit", "region:us" ]
image-classification
2023-08-07T17:43:47Z
--- license: mit pipeline_tag: image-classification --- A classifier built with mmpretrain on a ConvNeXtV2-tiny backbone, used to decide whether anime images are drawn in the Western style. The evaluation accuracy on the validation set is 95%. It was trained on 7,000 Western images and 8,000 non-Western images, with the Western training set sampled from e-hentai. The model still has shortcomings, such as very low accuracy on line-drawing images. Hugging Face Space: https://huggingface.co/spaces/TLME/western-anime-images-classification # How to use Python>=3.9 ```
# install PyTorch first, then:
pip install -r requirements.txt
# edit infer.py and change "path = './testimg/'" to your target folder
python infer.py
```
btamm12/bert-base-uncased-finetuned-wls-manual-7ep-lower
btamm12
2023-09-02T15:28:50Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:26:48Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-wls-manual-7ep-lower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wls-manual-7ep-lower This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1113 | 0.93 | 7 | 1.9498 | | 1.6005 | 2.0 | 15 | 1.5784 | | 1.4812 | 2.93 | 22 | 1.4474 | | 1.3854 | 4.0 | 30 | 1.4290 | | 1.2898 | 4.93 | 37 | 1.2682 | | 1.2785 | 6.0 | 45 | 1.2677 | | 1.2535 | 6.53 | 49 | 1.3363 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
ishan-07/full-finetuned-eurosat
ishan-07
2023-09-02T15:28:46Z
191
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-02T14:47:17Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: full-finetuned-eurosat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # full-finetuned-eurosat This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1905 - Accuracy: 0.9817 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4156 | 1.0 | 168 | 0.3044 | 0.9722 | | 0.2658 | 2.0 | 337 | 0.1905 | 0.9817 | | 0.2483 | 2.99 | 504 | 0.1670 | 0.9813 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
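For completeness, a hedged sketch of running inference with the `transformers` image-classification pipeline; the image path is a placeholder and the checkpoint is assumed to be publicly loadable.

```python
# Hedged usage sketch for the fine-tuned ViT classifier.
from transformers import pipeline

classifier = pipeline("image-classification", model="ishan-07/full-finetuned-eurosat")

# "satellite_tile.jpg" is a placeholder path; a URL to an image also works.
for pred in classifier("satellite_tile.jpg")[:3]:
    print(pred["label"], round(pred["score"], 3))
```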
The-matt/autumn-shadow-48_90
The-matt
2023-09-02T15:27:43Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:27:39Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
btamm12/bert-base-cased-finetuned-wls-manual-7ep
btamm12
2023-09-02T15:26:41Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:24:40Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-base-cased-finetuned-wls-manual-7ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-wls-manual-7ep This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2757 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1707 | 0.93 | 7 | 1.9153 | | 1.658 | 2.0 | 15 | 1.6462 | | 1.5689 | 2.93 | 22 | 1.5263 | | 1.4013 | 4.0 | 30 | 1.4385 | | 1.3501 | 4.93 | 37 | 1.4224 | | 1.293 | 6.0 | 45 | 1.3189 | | 1.2473 | 6.53 | 49 | 1.2231 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
Satorio/so-vits-4.1-Nice_Nature
Satorio
2023-09-02T15:22:42Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2023-08-06T13:14:51Z
--- license: cc-by-nc-4.0 --- Model: Nice Nature (Umamusume: Pretty Derby) Dataset Source: DMM Umamusume Game Still training to improve the model... Maybe better, maybe not...
The-matt/autumn-shadow-48_80
The-matt
2023-09-02T15:21:01Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:20:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
crewdon/AICategoryMapping-multilingual-e5-small
crewdon
2023-09-02T15:20:57Z
14
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-09-02T15:05:10Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # AICategoryMapping-multilingual-e5-small This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 94 with parameters: ``` {'batch_size': 400} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 40, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 376, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
btamm12/bert-base-cased-finetuned-wls-manual-6ep
btamm12
2023-09-02T15:18:21Z
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:16:23Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-base-cased-finetuned-wls-manual-6ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-wls-manual-6ep This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1598 | 0.93 | 7 | 1.8481 | | 1.6257 | 2.0 | 15 | 1.6306 | | 1.5537 | 2.93 | 22 | 1.5150 | | 1.3943 | 4.0 | 30 | 1.4392 | | 1.355 | 4.93 | 37 | 1.4389 | | 1.3098 | 5.6 | 42 | 1.3518 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
btamm12/roberta-base-finetuned-wls-manual-5ep
btamm12
2023-09-02T15:16:16Z
125
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:14:07Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-wls-manual-5ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-wls-manual-5ep This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8234 | 0.93 | 7 | 1.5153 | | 1.4411 | 2.0 | 15 | 1.3464 | | 1.2972 | 2.93 | 22 | 1.3354 | | 1.2674 | 4.0 | 30 | 1.2134 | | 1.2753 | 4.67 | 35 | 1.3446 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
btamm12/bert-base-uncased-finetuned-wls-manual-5ep-lower
btamm12
2023-09-02T15:14:00Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:12:03Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-wls-manual-5ep-lower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wls-manual-5ep-lower This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4858 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1142 | 0.93 | 7 | 1.9585 | | 1.6082 | 2.0 | 15 | 1.5910 | | 1.4973 | 2.93 | 22 | 1.4644 | | 1.4145 | 4.0 | 30 | 1.4717 | | 1.335 | 4.67 | 35 | 1.4035 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
KingKazma/xsum_t5-small_lora_500_10_50000_8_e1_s6789_v4_l4_r4
KingKazma
2023-09-02T15:09:11Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:09:10Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
btamm12/bert-base-uncased-finetuned-wls-manual-4ep-lower
btamm12
2023-09-02T15:07:01Z
116
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T15:04:34Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-wls-manual-4ep-lower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wls-manual-4ep-lower This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5279 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1174 | 0.93 | 7 | 1.9683 | | 1.617 | 2.0 | 15 | 1.6046 | | 1.5138 | 2.93 | 22 | 1.4859 | | 1.4474 | 3.73 | 28 | 1.4356 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
The-matt/autumn-shadow-48_60
The-matt
2023-09-02T15:06:47Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T15:06:44Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
NiscR/a2c-PandaReachDense-v3
NiscR
2023-09-02T15:06:45Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T15:01:15Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.22 +/- 0.11 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
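The card leaves its usage section as a TODO. Below is a hedged sketch of downloading the checkpoint with `huggingface_sb3` and loading it into `stable_baselines3`; the zip filename follows the usual `huggingface_sb3` naming convention and is an assumption, as is the `panda_gym` environment setup.

```python
# Hedged sketch filling in the card's TODO; the filename is assumed, not stated in the card.
import gymnasium as gym
import panda_gym  # noqa: F401  -- registers the PandaReachDense-v3 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="NiscR/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print(action)
```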
btamm12/roberta-base-finetuned-wls-manual-3ep
btamm12
2023-09-02T15:01:54Z
129
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T14:59:09Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-wls-manual-3ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-wls-manual-3ep This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8156 | 0.93 | 7 | 1.5116 | | 1.4371 | 2.0 | 15 | 1.3472 | | 1.3218 | 2.8 | 21 | 1.3278 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
dhinman/poca-SoccerTwos
dhinman
2023-09-02T15:00:49Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-09-02T14:59:42Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: dhinman/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
yaohuacn/a2c-PandaPickAndPlace-v3
yaohuacn
2023-09-02T15:00:35Z
3
0
stable-baselines3
[ "stable-baselines3", "PandaPickAndPlace-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T14:45:56Z
--- library_name: stable-baselines3 tags: - PandaPickAndPlace-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPickAndPlace-v3 type: PandaPickAndPlace-v3 metrics: - type: mean_reward value: -50.00 +/- 0.00 name: mean_reward verified: false --- # **A2C** Agent playing **PandaPickAndPlace-v3** This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
tsukemono/japanese-stablelm-base-alpha-7b-qlora-marisa
tsukemono
2023-09-02T14:58:35Z
0
0
null
[ "ja", "region:us" ]
null
2023-08-28T08:24:30Z
--- language: - ja --- ## Model overview A model you can chat with as Kirisame Marisa. It is LoRA data for [Japanese-StableLM-Base-Alpha-7B](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b). ## Usage An example of how to run inference is given in how_to_use.ipynb; please use it as a reference. Giving the model a prompt such as "ユーザー: hogehoge\n魔理沙: " (User: ... / Marisa: ) lets you chat with Marisa. ## Notes This is a derivative work of the Touhou Project. --- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
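The card points to how_to_use.ipynb for the actual inference procedure; as a rough orientation only, here is a hedged sketch of attaching the LoRA to the base model with `peft`. The `trust_remote_code` flag and device settings are assumptions, and the tokenizer setup (which follows the base model's own card) is omitted.

```python
# Rough, hedged sketch; see the repository's how_to_use.ipynb for the intended procedure.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-base-alpha-7b",
    trust_remote_code=True,  # assumption: the base model ships custom modeling code
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "tsukemono/japanese-stablelm-base-alpha-7b-qlora-marisa"
)

# Prompts follow the pattern from the card: "ユーザー: <message>\n魔理沙: "
```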
nightdude/config_821
nightdude
2023-09-02T14:53:38Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T14:52:34Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
btamm12/bert-base-uncased-finetuned-wls-manual-2ep-lower
btamm12
2023-09-02T14:51:03Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T14:48:39Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-wls-manual-2ep-lower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wls-manual-2ep-lower This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7614 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1678 | 0.93 | 7 | 2.0527 | | 1.6854 | 1.87 | 14 | 1.7688 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
Therence-NG/Decoder-1b
Therence-NG
2023-09-02T14:49:19Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T14:49:17Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
DrYond3r/OrelsanV1
DrYond3r
2023-09-02T14:44:10Z
0
0
null
[ "arxiv:1910.09700", "license:openrail", "region:us" ]
null
2023-08-30T07:07:50Z
--- license: openrail --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
btamm12/bert-base-uncased-finetuned-wls-manual-1ep-lower
btamm12
2023-09-02T14:44:00Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T14:42:17Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-wls-manual-1ep-lower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-wls-manual-1ep-lower This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8872 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1338 | 0.93 | 7 | 2.0952 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
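### Example usage (sketch)

The card above does not include an inference snippet. As a minimal sketch, assuming the checkpoint behaves like any standard masked-LM fine-tune of `bert-base-uncased`, the 🤗 Transformers `fill-mask` pipeline could be used like this (the example sentence is purely illustrative, not from the undocumented training data):

```python
from transformers import pipeline

# Hedged sketch: the repo id comes from this card; the example sentence is illustrative.
fill_mask = pipeline("fill-mask", model="btamm12/bert-base-uncased-finetuned-wls-manual-1ep-lower")

for prediction in fill_mask("The weather today is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```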
btamm12/bert-base-cased-finetuned-wls-manual-1ep
btamm12
2023-09-02T14:42:09Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-02T14:40:23Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-base-cased-finetuned-wls-manual-1ep results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-wls-manual-1ep This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8675 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1332 | 0.93 | 7 | 1.9236 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.11.0+cu113 - Datasets 2.14.4 - Tokenizers 0.13.3
Campqt/ppo-LunarLander-v2-unit8
Campqt
2023-09-02T14:39:07Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T14:24:15Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -78.14 +/- 80.44 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo', 'seed': 1, 'torch_deterministic': True, 'cuda': True, 'track': False, 'wandb_project_name': 'cleanRL', 'wandb_entity': None, 'capture_video': False, 'env_id': 'LunarLander-v2', 'total_timesteps': 500000, 'learning_rate': 0.00025, 'num_envs': 4, 'num_steps': 128, 'anneal_lr': True, 'gae': True, 'gamma': 0.99, 'gae_lambda': 0.95, 'num_minibatches': 4, 'update_epochs': 4, 'norm_adv': True, 'clip_coef': 0.2, 'clip_vloss': True, 'ent_coef': 0.01, 'vf_coef': 0.5, 'max_grad_norm': 0.5, 'target_kl': None, 'repo_id': 'Campqt/ppo-LunarLander-v2-unit8', 'batch_size': 512, 'minibatch_size': 128} ```
Lenouche/RauruTOTK
Lenouche
2023-09-02T14:38:39Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-21T21:56:01Z
--- language: - fr model type: - voice epochs: - 300 model version: - RVC.v2 license: openrail ---
plaguss/dialogpt_dwight2
plaguss
2023-09-02T14:38:09Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-30T17:34:12Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0
rrozb/Reinforce-1
rrozb
2023-09-02T14:36:41Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T14:36:31Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
BadreddineHug/LayoutLMv3_97_1
BadreddineHug
2023-09-02T14:34:25Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-31T16:04:53Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: LayoutLMv3_97_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LayoutLMv3_97_1 This model is a fine-tuned version of [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8446 - Precision: 0.5939 - Recall: 0.8376 - F1: 0.6950 - Accuracy: 0.8952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 2.44 | 100 | 0.4463 | 0.4830 | 0.7265 | 0.5802 | 0.8599 | | No log | 4.88 | 200 | 0.4064 | 0.5924 | 0.7949 | 0.6788 | 0.8884 | | No log | 7.32 | 300 | 0.4774 | 0.5813 | 0.7949 | 0.6715 | 0.8907 | | No log | 9.76 | 400 | 0.5800 | 0.6013 | 0.7863 | 0.6815 | 0.8907 | | 0.2076 | 12.2 | 500 | 0.6426 | 0.6209 | 0.8120 | 0.7037 | 0.8952 | | 0.2076 | 14.63 | 600 | 0.6872 | 0.5939 | 0.8376 | 0.6950 | 0.8907 | | 0.2076 | 17.07 | 700 | 0.7801 | 0.5915 | 0.8291 | 0.6904 | 0.8918 | | 0.2076 | 19.51 | 800 | 0.7865 | 0.5890 | 0.8205 | 0.6857 | 0.8895 | | 0.2076 | 21.95 | 900 | 0.8533 | 0.5854 | 0.8205 | 0.6833 | 0.8895 | | 0.0109 | 24.39 | 1000 | 0.7738 | 0.5864 | 0.8120 | 0.6810 | 0.8941 | | 0.0109 | 26.83 | 1100 | 0.8297 | 0.5854 | 0.8205 | 0.6833 | 0.8872 | | 0.0109 | 29.27 | 1200 | 0.7690 | 0.6062 | 0.8291 | 0.7004 | 0.8975 | | 0.0109 | 31.71 | 1300 | 0.8629 | 0.5904 | 0.8376 | 0.6926 | 0.8895 | | 0.0109 | 34.15 | 1400 | 0.8104 | 0.5976 | 0.8376 | 0.6975 | 0.8941 | | 0.0027 | 36.59 | 1500 | 0.7864 | 0.5926 | 0.8205 | 0.6882 | 0.8929 | | 0.0027 | 39.02 | 1600 | 0.8002 | 0.6037 | 0.8462 | 0.7046 | 0.8986 | | 0.0027 | 41.46 | 1700 | 0.8049 | 0.5964 | 0.8462 | 0.6996 | 0.8964 | | 0.0027 | 43.9 | 1800 | 0.8355 | 0.5939 | 0.8376 | 0.6950 | 0.8952 | | 0.0027 | 46.34 | 1900 | 0.8402 | 0.5939 | 0.8376 | 0.6950 | 0.8952 | | 0.001 | 48.78 | 2000 | 0.8446 | 0.5939 | 0.8376 | 0.6950 | 0.8952 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
ckandemir/xlm-roberta-base-finetuned-panx-all
ckandemir
2023-09-02T14:31:52Z
103
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-02T13:34:37Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1723 - F1: 0.8549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3018 | 1.0 | 835 | 0.1952 | 0.8121 | | 0.1575 | 2.0 | 1670 | 0.1776 | 0.8404 | | 0.1017 | 3.0 | 2505 | 0.1723 | 0.8549 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
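### Example usage (sketch)

A minimal sketch with the 🤗 Transformers `token-classification` pipeline, assuming the checkpoint ships with its NER label mapping; the sample sentence and the aggregation strategy are illustrative choices, not part of the card:

```python
from transformers import pipeline

# Hedged sketch: repo id from the card header; input text and settings are illustrative.
ner = pipeline(
    "token-classification",
    model="ckandemir/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte im Juli Paris."))
```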
The-matt/autumn-shadow-48_10
The-matt
2023-09-02T14:30:51Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T14:30:47Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
Lenouche/Sblerky
Lenouche
2023-09-02T14:30:42Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-13T23:01:35Z
--- license: openrail language: - fr ---
Lenouche/Conkerax
Lenouche
2023-09-02T14:30:03Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-13T22:13:05Z
--- license: openrail language: - fr ---
Lenouche/MrBidouille
Lenouche
2023-09-02T14:29:22Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-16T20:37:30Z
--- language: - fr license: openrail ---
Lenouche/GiaTechAndGaming
Lenouche
2023-09-02T14:28:46Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-17T01:44:54Z
--- language: - fr license: openrail ---
Lenouche/SebDuGrenier
Lenouche
2023-09-02T14:28:23Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-17T15:16:22Z
--- language: - fr model type: - voice epochs: - 300 model version: - RVC.v2 license: openrail ---
Zevin2023/MoC-IQA
Zevin2023
2023-09-02T14:28:05Z
0
0
null
[ "aa", "license:openrail", "region:us" ]
null
2023-09-02T14:02:17Z
--- license: openrail language: - aa metrics: - accuracy ---
Lenouche/TevIciJapon
Lenouche
2023-09-02T14:27:59Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-17T18:47:02Z
--- language: - fr license: openrail ---
Lenouche/ReneMalleville
Lenouche
2023-09-02T14:27:12Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-29T16:00:07Z
--- language: - fr license: openrail ---
Lenouche/LouisSan
Lenouche
2023-09-02T14:27:01Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-27T00:10:33Z
--- language: - fr license: openrail ---
Lenouche/DefendIntelligence
Lenouche
2023-09-02T14:26:44Z
0
0
null
[ "fr", "license:openrail", "region:us" ]
null
2023-08-31T00:44:45Z
--- language: - fr license: openrail ---
SymeCloud/Llama2-7b-Chat-GGUF
SymeCloud
2023-09-02T14:25:41Z
1
2
transformers
[ "transformers", "llama", "code", "llama-2", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-09-02T11:59:57Z
--- license: apache-2.0 language: - en tags: - code - llama-2 --- # Llama2 Chat 7B - GGUF - Model creator: [Meta](https://huggingface.co/meta-llama) - Original model: [Llama 2 7b Chat GGML](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML) <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates. * [llama.cpp](https://github.com/ggerganov/llama.cpp)
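### Loading sketch (llama-cpp-python)

The card describes the GGUF format but does not show how to load the files. A hedged sketch using `llama-cpp-python` follows; the repo id comes from this card, but the `.gguf` filename is a placeholder and must be replaced with a real file from the repository listing:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# The repo id comes from this card; the .gguf filename below is a placeholder,
# so check the repository's file listing for the real name.
model_path = hf_hub_download(
    repo_id="SymeCloud/Llama2-7b-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical filename
)
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Q: What does the GGUF format add over GGML? A:", max_tokens=64)
print(out["choices"][0]["text"])
```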
Kamer/DuplicatesUnique
Kamer
2023-09-02T14:24:10Z
109
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-02T13:36:09Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: DuplicatesUnique results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DuplicatesUnique This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 1.7513 - eval_Accuracy: 0.3885 - eval_F1_macro: 0.1389 - eval_F1_class_0: 0.8712 - eval_F1_class_1: 0.6667 - eval_F1_class_2: 0.2133 - eval_F1_class_3: 0.0 - eval_F1_class_4: 0.0 - eval_F1_class_5: 0.0 - eval_F1_class_6: 0.0187 - eval_F1_class_7: 0.0 - eval_F1_class_8: 0.0 - eval_F1_class_9: 0.8726 - eval_F1_class_10: 0.0147 - eval_F1_class_11: 0.0 - eval_F1_class_12: 0.1204 - eval_F1_class_13: 0.0 - eval_F1_class_14: 0.0 - eval_F1_class_15: 0.0 - eval_F1_class_16: 0.0 - eval_F1_class_17: 0.0 - eval_F1_class_18: 0.0 - eval_F1_class_19: 0.0 - eval_runtime: 16.4781 - eval_samples_per_second: 68.576 - eval_steps_per_second: 8.618 - epoch: 0.77 - step: 5000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.32.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
CzarnyRycerz/ppo-Huggy
CzarnyRycerz
2023-09-02T14:16:53Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-09-02T14:16:42Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: CzarnyRycerz/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
hmxiong/epcl_vit_l
hmxiong
2023-09-02T14:09:30Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-09-02T08:16:22Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [OpenLAMM] - **Model type:** [Pytorch] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [FrozenCLIP] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> ScanNet [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Avenuenw/prompt-extender
Avenuenw
2023-09-02T13:58:26Z
111
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-02T13:52:41Z
--- license: mit tags: - generated_from_trainer model-index: - name: prompt-extend results: [] --- [![Generic badge](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue.svg)](https://huggingface.co/spaces/daspartho/prompt-extend) # Prompt Extend A text-generation model that generates suitable style cues given the main idea for a prompt. It is a GPT-2 model trained on a [dataset](https://huggingface.co/datasets/daspartho/stable-diffusion-prompts) of Stable Diffusion prompts. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.7436 | 1.0 | 12796 | 2.5429 | | 2.3292 | 2.0 | 25592 | 2.0711 | | 1.9439 | 3.0 | 38388 | 1.8447 | | 1.7059 | 4.0 | 51184 | 1.7325 | | 1.5775 | 5.0 | 63980 | 1.7110 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
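### Example usage (sketch)

A minimal sketch with the 🤗 Transformers `text-generation` pipeline; the prompt stub and generation settings are illustrative, not taken from the card:

```python
from transformers import pipeline

# Hedged sketch: repo id from the card header; prompt and settings are illustrative.
extender = pipeline("text-generation", model="Avenuenw/prompt-extender")
print(extender("a castle on a cliff at sunset,", max_new_tokens=40)[0]["generated_text"])
```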
NiscR/Pyramids-1
NiscR
2023-09-02T13:53:42Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-09-02T13:53:36Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: NiscR/Pyramids-1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
VinayHajare/ppo-LunarLander-v2
VinayHajare
2023-09-02T13:51:21Z
5
3
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T06:37:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.26 +/- 19.25 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python # !pip install gymnasium huggingface-sb3 stable_baselines3[extra] import gymnasium as gym from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO from stable_baselines3.common.vec_env import DummyVecEnv from stable_baselines3.common.evaluation import evaluate_policy from stable_baselines3.common.monitor import Monitor repo_id = "VinayHajare/ppo-LunarLander-v2" filename = "ppo-LunarLander-v2.zip" eval_env = gym.make("LunarLander-v2", render_mode="human") checkpoint = load_from_hub(repo_id, filename) model = PPO.load(checkpoint, print_system_info=True) mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True) print(f"mean_reward={mean_reward:.2f} +/- {std_reward}") # Enjoy trained agent observation, info = eval_env.reset() for _ in range(1000): action, _states = model.predict(observation, deterministic=True) observation, rewards, terminated, truncated, info = eval_env.step(action) eval_env.render() ```
venetis/roberta-base-finetuned-3d-sentiment
venetis
2023-09-02T13:41:28Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T07:49:37Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: roberta-base-finetuned-3d-sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-3d-sentiment This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5817 - Accuracy: 0.7753 - Precision: 0.7757 - Recall: 0.7753 - F1: 0.7745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 6381 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.7758 | 1.0 | 1595 | 0.7691 | 0.7069 | 0.7256 | 0.7069 | 0.7052 | | 0.5496 | 2.0 | 3190 | 0.6961 | 0.7255 | 0.7441 | 0.7255 | 0.7252 | | 0.4856 | 3.0 | 4785 | 0.6451 | 0.7368 | 0.7562 | 0.7368 | 0.7328 | | 0.4257 | 4.0 | 6380 | 0.5817 | 0.7753 | 0.7757 | 0.7753 | 0.7745 | | 0.351 | 5.0 | 7975 | 0.6637 | 0.7633 | 0.7717 | 0.7633 | 0.7637 | | 0.2551 | 6.0 | 9570 | 0.7646 | 0.7696 | 0.7738 | 0.7696 | 0.7699 | | 0.1845 | 7.0 | 11165 | 0.8529 | 0.7674 | 0.7730 | 0.7674 | 0.7680 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.10.1 - Tokenizers 0.13.3
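### Example usage (sketch)

A minimal sketch with the 🤗 Transformers `text-classification` pipeline. The card does not document the label mapping, so predictions may come back as generic `LABEL_0`/`LABEL_1`/... ids:

```python
from transformers import pipeline

# Hedged sketch: repo id from the card header; the input sentence is illustrative.
sentiment = pipeline("text-classification", model="venetis/roberta-base-finetuned-3d-sentiment")
print(sentiment("The printed part came out clean with no stringing at all."))
```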
pritam3355/llama2-qlora-finetunined-french
pritam3355
2023-09-02T13:34:55Z
2
0
peft
[ "peft", "region:us" ]
null
2023-09-02T13:30:27Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
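### Loading sketch

The quantization settings listed above can be mirrored at load time. This is a hedged sketch, not the author's documented recipe: the base checkpoint is not stated in the card, so the Llama-2 7B id below is an assumption inferred from the repo name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the bitsandbytes settings above: 4-bit nf4, float16 compute, no double quant.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumption: a Llama-2 7B base model (suggested by the repo name, not stated
# in the card) - swap in the base checkpoint that was actually used.
base_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "pritam3355/llama2-qlora-finetunined-french")
```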
Kamer/NoDuplicates
Kamer
2023-09-02T13:27:46Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-01T16:09:33Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: NoDuplicates results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NoDuplicates This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4279 - Accuracy: 0.9128 - F1 Macro: 0.8384 - F1 Class 0: 0.9406 - F1 Class 1: 0.3333 - F1 Class 2: 0.9127 - F1 Class 3: 0.6471 - F1 Class 4: 0.8254 - F1 Class 5: 0.8293 - F1 Class 6: 0.8767 - F1 Class 7: 0.7606 - F1 Class 8: 0.7500 - F1 Class 9: 0.9878 - F1 Class 10: 0.9444 - F1 Class 11: 0.9630 - F1 Class 12: 0.9265 - F1 Class 13: 0.8980 - F1 Class 14: 0.8444 - F1 Class 15: 0.8132 - F1 Class 16: 0.7778 - F1 Class 17: 0.9651 - F1 Class 18: 0.9574 - F1 Class 19: 0.8148 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Class 0 | F1 Class 1 | F1 Class 2 | F1 Class 3 | F1 Class 4 | F1 Class 5 | F1 Class 6 | F1 Class 7 | F1 Class 8 | F1 Class 9 | F1 Class 10 | F1 Class 11 | F1 Class 12 | F1 Class 13 | F1 Class 14 | F1 Class 15 | F1 Class 16 | F1 Class 17 | F1 Class 18 | F1 Class 19 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:| | 1.4862 | 0.27 | 300 | 0.8201 | 0.7845 | 0.4484 | 0.8675 | 0.0 | 0.8627 | 0.0 | 0.6733 | 0.0 | 0.6627 | 0.0 | 0.0 | 0.9862 | 0.1935 | 0.9600 | 0.8299 | 0.0833 | 0.2353 | 0.24 | 0.0400 | 0.8852 | 0.9451 | 0.5033 | | 0.7269 | 0.53 | 600 | 0.5951 | 0.8491 | 0.6504 | 0.9048 | 0.0 | 0.8567 | 0.0 | 0.7596 | 0.6111 | 0.6887 | 0.0 | 0.0 | 0.9877 | 0.8033 | 0.9286 | 0.8798 | 0.9167 | 0.74 | 0.6857 | 0.5823 | 0.9506 | 0.9485 | 0.7640 | | 0.5429 | 0.8 | 900 | 0.5375 | 0.8637 | 0.7086 | 0.8904 | 0.0 | 0.8589 | 0.0 | 0.7254 | 0.7805 | 0.8215 | 0.6769 | 0.0 | 0.9877 | 0.7833 | 1.0 | 0.9022 | 0.9130 | 0.7912 | 0.7733 | 0.7048 | 0.9032 | 0.9474 | 0.7119 | | 0.4594 | 1.06 | 1200 | 0.5110 | 0.8805 | 0.7113 | 0.9099 | 0.0 | 0.8925 | 0.0 | 0.7706 | 0.7391 | 0.8139 | 0.4091 | 0.0 | 0.9908 | 0.8785 | 1.0 | 0.8983 | 0.8936 | 0.8090 | 0.7556 | 0.7907 | 0.9529 | 0.9574 | 0.7647 | | 0.3484 | 1.33 | 1500 | 0.4679 | 0.8951 | 0.7667 | 0.9180 | 0.0 | 0.9080 | 0.6957 | 0.8 | 0.7619 | 0.8299 | 0.6875 | 0.0 | 0.9908 | 0.8909 | 1.0 | 0.9196 | 0.9130 | 0.8172 | 0.7865 | 0.7527 | 0.9398 | 0.9474 | 0.7755 | | 0.3744 | 1.59 | 1800 | 0.4359 | 0.8951 | 0.7774 | 0.9290 | 0.0 | 0.8815 | 0.8462 | 0.8049 | 0.7805 | 0.8449 | 0.7059 | 0.0 | 0.9908 | 0.9346 | 1.0 | 0.9143 | 0.8980 | 0.8387 | 0.7475 | 0.7179 | 0.9647 | 0.9583 | 0.7895 | | 0.3514 | 1.86 | 2100 | 0.5161 | 0.8903 | 0.7592 | 
0.9109 | 0.0 | 0.8973 | 0.6429 | 0.7603 | 0.7907 | 0.8571 | 0.7077 | 0.0 | 0.9908 | 0.9346 | 1.0 | 0.8971 | 0.8936 | 0.7042 | 0.7324 | 0.7857 | 0.9595 | 0.9574 | 0.7609 | | 0.3111 | 2.12 | 2400 | 0.4327 | 0.9080 | 0.8027 | 0.9283 | 0.3333 | 0.9141 | 0.7407 | 0.8207 | 0.8095 | 0.8622 | 0.7606 | 0.0 | 0.9908 | 0.9298 | 0.9630 | 0.9215 | 0.9167 | 0.8041 | 0.8 | 0.8132 | 0.9651 | 0.9574 | 0.8224 | | 0.2088 | 2.39 | 2700 | 0.4356 | 0.9128 | 0.8452 | 0.9386 | 0.3333 | 0.9058 | 0.8462 | 0.8265 | 0.8 | 0.8562 | 0.7429 | 0.7500 | 0.9893 | 0.9346 | 0.9630 | 0.9322 | 0.8936 | 0.8205 | 0.8372 | 0.7765 | 0.9651 | 0.9574 | 0.8350 | | 0.2317 | 2.65 | 3000 | 0.4294 | 0.9137 | 0.8217 | 0.9365 | 0.3333 | 0.9102 | 0.625 | 0.8243 | 0.8293 | 0.875 | 0.8056 | 0.3333 | 0.9893 | 0.9444 | 0.9630 | 0.9284 | 0.8980 | 0.8478 | 0.8471 | 0.7816 | 0.9651 | 0.9574 | 0.8400 | | 0.1816 | 2.92 | 3300 | 0.4279 | 0.9128 | 0.8384 | 0.9406 | 0.3333 | 0.9127 | 0.6471 | 0.8254 | 0.8293 | 0.8767 | 0.7606 | 0.7500 | 0.9878 | 0.9444 | 0.9630 | 0.9265 | 0.8980 | 0.8444 | 0.8132 | 0.7778 | 0.9651 | 0.9574 | 0.8148 | ### Framework versions - Transformers 4.32.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
jongalon/intel_image_classification_fastai
jongalon
2023-09-02T13:17:37Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-09-02T13:17:34Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
chillpixel/blacklight-makeup-sdxl-lora
chillpixel
2023-09-02T13:15:34Z
651
8
diffusers
[ "diffusers", "art", "style", "sdxl", "lora", "stable diffusion", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "blacklight", "makeup", "neon", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-08-26T22:37:20Z
--- library_name: diffusers pipeline_tag: text-to-image base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - art - style - sdxl - lora - stable diffusion - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - blacklight - makeup - neon inference: true --- # Blacklight Makeup — SDXL LoRA ![Blacklight Makeup — SDXL LoRA Example Images](blacklight-makeup-sdxl-lora.jpg) ## <span style="color: orange;">Blacklight Makeup</span> is a fun art style for SDXL **Difficulty**: <span style="color: indianred;">*Advanced*</span> (not for beginners) **Blacklight makeup** is a mesmerizing art style that I have come to enjoy for its *creativity* and *boldness*. The magic lies in its ability to transform a simple canvas, such as the human face and body, into a vibrant and otherworldly masterpiece under the enchanting glow of ultraviolet light. The way the colors pop and come to life creates an almost surreal experience for both the creator and the audience. It's like stepping into a dreamlike realm. I hope that Blacklight Makeup's radiant glow inspires you to experiment, to challenge norms, and to create beauty that transcends the ordinary! ### What's new in Version 2? I've retrained it with *improved captions and parameters*, which brings: - simpler trigger words: `blacklight makeup` - better output quality - reduced file size - improved compatibility with other LoRAs ### What's next? Enhancing the dataset while also experimenting with new training techniques. ### How to use: **Example prompt:** `Portrait of woman with blacklight makeup, fantasy, highly detailed, digital painting, artstation, concept art, sharp focus, illustration, art by Tony Sart and artgerm and randy vargas` - trigger words: `blacklight makeup` - **combine with other LoRAs for extra fun!** - `<lora:blacklight_makeup_v2:1>` - **2:3** — 832x1248 - **16:9** — 1360x768 - **1:1** — 1024x1024 #### HuggingFace🤗 Diffusers ```python from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True, ) pipe.scheduler = EulerDiscreteScheduler.from_config( pipe.scheduler.config, use_karras_sigmas=True ) pipe.to("cuda") pipe.load_lora_weights( "chillpixel/blacklight-makeup-sdxl-lora", weight_name="blacklight_makeup_v2.safetensors", ) image = pipe( prompt="Portrait of woman with blacklight makeup, fantasy, highly detailed, digital painting, artstation, concept art, sharp focus, illustration, art by Tony Sart and artgerm and randy vargas", num_inference_steps=35, guidance_scale=6, width=832, height=1248, ).images[0] ``` #### Also, available at: - [LoRA the Explorer](https://huggingface.co/spaces/multimodalart/LoraTheExplorer) - [CivitAI](https://civitai.com/models/134643/blacklight-makeup-sdxl-lora) - [Tensor.Art](https://tensor.art/models/630245562870045528) - [Ko-Fi](https://ko-fi.com/s/9d846bf374) I really hope you enjoy this LoRA — and if you do, ***please click the "like" button!*** I will release a new model every time somebody [buys me a coffee on Ko-Fi](https://ko-fi.com/chillpixel). Want to hire me to train SDXL? I'm open to innovation and marketing opportunities. Contact me at [email protected]
SaadoN/bert-finetuned-squad
SaadoN
2023-09-02T13:14:39Z
122
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-09-02T10:57:32Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
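### Example usage (sketch)

A minimal sketch with the 🤗 Transformers `question-answering` pipeline; the question/context pair is illustrative:

```python
from transformers import pipeline

# Hedged sketch: repo id from the card header; question and context are illustrative.
qa = pipeline("question-answering", model="SaadoN/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```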
LiChenYi/QA
LiChenYi
2023-09-02T13:05:16Z
0
0
null
[ "license:unknown", "region:us" ]
null
2023-09-02T12:55:15Z
--- license: unknown --- A log of problems encountered while using AI tools, recorded so that later users can avoid the same pitfalls. # 2. Problems during Colab usage 1. Pulling data from a huggingface repository inside Colab fails with the following error: Connecting to [huggingface.co](http://huggingface.co/) ([huggingface.co](http://huggingface.co/))|18.239.50.16|:443... connected. HTTP request sent, awaiting response... 401 Unauthorized Solution: open the huggingface settings and set the user access requests option (User Access requests) to disabled.
unionhu/test1
unionhu
2023-09-02T12:56:55Z
0
0
allennlp
[ "allennlp", "chemistry", "token-classification", "en", "dataset:fka/awesome-chatgpt-prompts", "license:openrail", "region:us" ]
token-classification
2023-09-02T12:52:47Z
--- license: openrail datasets: - fka/awesome-chatgpt-prompts language: - en metrics: - bleu library_name: allennlp pipeline_tag: token-classification tags: - chemistry ---
astroid19/ppo-LunarLander-v2
astroid19
2023-09-02T12:46:19Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T12:45:58Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 284.82 +/- 21.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
HorcruxNo13/swinv2-small-patch4-window8-256-finetuned-eurosat
HorcruxNo13
2023-09-02T12:44:00Z
146
0
transformers
[ "transformers", "pytorch", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-small-patch4-window8-256", "base_model:finetune:microsoft/swinv2-small-patch4-window8-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-02T12:25:25Z
--- license: apache-2.0 base_model: microsoft/swinv2-small-patch4-window8-256 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swinv2-small-patch4-window8-256-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.7333333333333333 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-small-patch4-window8-256-finetuned-eurosat This model is a fine-tuned version of [microsoft/swinv2-small-patch4-window8-256](https://huggingface.co/microsoft/swinv2-small-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5868 - Accuracy: 0.7333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 8 | 1.1951 | 0.2667 | | 5.0901 | 2.0 | 16 | 1.4301 | 0.7333 | | 2.785 | 3.0 | 24 | 1.1514 | 0.2667 | | 0.8599 | 4.0 | 32 | 0.5810 | 0.7333 | | 0.6058 | 5.0 | 40 | 0.5868 | 0.7333 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
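### Example usage (sketch)

A minimal sketch with the 🤗 Transformers `image-classification` pipeline; the image path is a placeholder:

```python
from transformers import pipeline

# Hedged sketch: repo id from the card header; the image path is a placeholder.
classifier = pipeline(
    "image-classification",
    model="HorcruxNo13/swinv2-small-patch4-window8-256-finetuned-eurosat",
)
print(classifier("path/to/your_image.png"))  # hypothetical local image path
```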
mademuhas/qlora-cabrita-joao
mademuhas
2023-09-02T12:32:23Z
0
0
null
[ "generated_from_trainer", "base_model:tiiuae/falcon-7b", "base_model:finetune:tiiuae/falcon-7b", "license:apache-2.0", "region:us" ]
null
2023-09-02T12:32:17Z
--- license: apache-2.0 base_model: tiiuae/falcon-7b tags: - generated_from_trainer model-index: - name: qlora-cabrita-joao results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qlora-cabrita-joao This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.0 - Datasets 2.14.4 - Tokenizers 0.13.3
simlamkr1/llama2-simtestmodel1
simlamkr1
2023-09-02T12:32:06Z
0
0
peft
[ "peft", "pytorch", "llama", "region:us" ]
null
2023-09-01T13:56:00Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
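### Loading sketch

A hedged sketch of attaching and merging the adapter for deployment. It assumes the adapter is LoRA-style (typical for the bitsandbytes setup above) and that the base is a Llama-2 checkpoint, as the `llama` tag suggests; neither is stated in the card.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Assumptions: LoRA-style adapter on a Llama-2 base; the base id below is a
# placeholder. Loading in fp16 so the adapter weights can be merged.
base_id = "meta-llama/Llama-2-7b-hf"  # hypothetical base checkpoint
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "simlamkr1/llama2-simtestmodel1")
merged = model.merge_and_unload()  # plain Transformers model, no PEFT wrapper
merged.save_pretrained("llama2-simtestmodel1-merged")
```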
penguinman73/xlm-roberta-base-finetuned-panx-en
penguinman73
2023-09-02T12:25:02Z
103
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-02T12:22:08Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4028 - F1: 0.6831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1353 | 1.0 | 50 | 0.6267 | 0.5068 | | 0.5283 | 2.0 | 100 | 0.4369 | 0.6552 | | 0.358 | 3.0 | 150 | 0.4028 | 0.6831 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
NiscR/Reinforce-Pixel1
NiscR
2023-09-02T12:19:12Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T11:35:10Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixel1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 31.20 +/- 23.29 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
penguinman73/xlm-roberta-base-finetuned-panx-de-fr
penguinman73
2023-09-02T12:12:18Z
114
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-02T11:58:38Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1623 - F1: 0.8603 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2891 | 1.0 | 715 | 0.1813 | 0.8232 | | 0.1482 | 2.0 | 1430 | 0.1586 | 0.8462 | | 0.0959 | 3.0 | 2145 | 0.1623 | 0.8603 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
darthruebezahl/alicia02092023
darthruebezahl
2023-09-02T12:09:23Z
29
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-02T12:07:42Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: Alicia02092023 --- ### Alicia02092023 Dreambooth model trained by darthruebezahl with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: Alicia02092023 (use that on your prompt) ![Alicia02092023 0](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%281%29.jpg)![Alicia02092023 1](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%282%29.jpg)![Alicia02092023 2](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%283%29.jpg)![Alicia02092023 3](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%284%29.jpg)![Alicia02092023 4](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%285%29.jpg)![Alicia02092023 5](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%286%29.jpg)![Alicia02092023 6](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%287%29.jpg)![Alicia02092023 7](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%288%29.jpg)![Alicia02092023 8](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%289%29.jpg)![Alicia02092023 9](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2810%29.jpg)![Alicia02092023 10](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2811%29.jpg)![Alicia02092023 11](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2812%29.jpg)![Alicia02092023 12](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2813%29.jpg)![Alicia02092023 13](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2814%29.jpg)![Alicia02092023 14](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2815%29.jpg)![Alicia02092023 15](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2816%29.jpg)![Alicia02092023 16](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2817%29.jpg)![Alicia02092023 17](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2818%29.jpg)![Alicia02092023 18](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2819%29.jpg)![Alicia02092023 19](https://huggingface.co/darthruebezahl/alicia02092023/resolve/main/concept_images/Alicia02092023_%2820%29.jpg)
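### Inference sketch (diffusers)

Besides the Colab notebook linked above, a hedged local sketch with `diffusers` could look like this; the prompt wording, dtype, and device are illustrative choices:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: repo id and trigger word come from this card; other settings are illustrative.
pipe = StableDiffusionPipeline.from_pretrained(
    "darthruebezahl/alicia02092023", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait photo of Alicia02092023, studio lighting").images[0]
image.save("alicia02092023.png")
```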
fkc294/xlm-roberta-base-finetuned-panx-de
fkc294
2023-09-02T11:56:53Z
124
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-09-02T11:06:08Z
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      config: PAN-X.de
      split: validation
      args: PAN-X.de
    metrics:
    - name: F1
      type: f1
      value: 0.8646808510638297
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1361
- F1: 0.8647

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2595        | 1.0   | 525  | 0.1540          | 0.8302 |
| 0.1265        | 2.0   | 1050 | 0.1493          | 0.8468 |
| 0.0806        | 3.0   | 1575 | 0.1361          | 0.8647 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
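The card ships no usage snippet; a hedged sketch of running German NER with this checkpoint through the `transformers` pipeline is below (the `aggregation_strategy` value and the example sentence are assumptions).

```python
# Usage sketch: token classification (PAN-X.de NER) with the fine-tuned model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="fkc294/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```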
penguinman73/xlm-roberta-base-finetuned-panx-de
penguinman73
2023-09-02T11:56:10Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-27T01:35:12Z
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2992
- F1: 0.8285

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6098        | 1.0   | 167  | 0.3570          | 0.7592 |
| 0.2633        | 2.0   | 334  | 0.2995          | 0.8171 |
| 0.1792        | 3.0   | 501  | 0.2992          | 0.8285 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
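As an alternative to the pipeline helper, an illustrative sketch of calling the model directly and mapping logits back to label names (the example sentence is a placeholder, not from the card):

```python
# Direct-inference sketch: tokenize, forward, and decode per-token labels.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

repo = "penguinman73/xlm-roberta-base-finetuned-panx-de"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

inputs = tokenizer("Angela Merkel wohnt in Berlin.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(tok, model.config.id2label[i]) for tok, i in zip(tokens, pred_ids)])
```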
amgodbole/bloom_prompt_tuning_1693653323.8270018
amgodbole
2023-09-02T11:36:37Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-02T11:36:36Z
---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.4.0
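A hedged loading sketch for this prompt-tuning adapter: the base model is read from the adapter's own config rather than assumed, and the example prompt is only a placeholder.

```python
# Sketch: attach the PEFT prompt-tuning adapter to its base causal LM.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter = "amgodbole/bloom_prompt_tuning_1693653323.8270018"
config = PeftConfig.from_pretrained(adapter)            # stores base_model_name_or_path
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter)

inputs = tokenizer("Tweet text: I love this! Label:", return_tensors="pt")  # placeholder prompt format
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=5)[0]))
```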
softaken/softaken-dbx-to-pst-converter
softaken
2023-09-02T11:35:00Z
0
0
null
[ "region:us" ]
null
2023-09-02T11:18:55Z
Softaken DBX to PST Converter Software is a convenient computer program for exporting Outlook Express emails to the Outlook PST file format. Users can export single or multiple DBX files and folders to PST, and no technical knowledge is needed to operate the software. There is no limit on the amount of DBX data that can be converted. The tool provides a complete preview of each DBX file before the conversion begins, and it can also export DBX files to several other widely used formats, such as PST, EML, EMLX, MSG, and MBOX. The software works with multiple MS Outlook versions, including 2002, 2003, 2007, 2010, 2013, 2016, and 2019, and lets users save the exported data to any location they choose on the desktop. It is a Windows-based tool compatible with all Windows systems, including Windows 11, Windows 10 S, Windows 10, Windows 8/8.1, Windows 7, Windows Vista, Windows XP, and Windows 2000. Grab the free demo version to learn more about the software's features and functions.

Read More: https://www.softaken.com/dbx-to-pst-converter
casque/FilmVelvia3
casque
2023-09-02T11:34:13Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-02T11:32:49Z
--- license: creativeml-openrail-m ---
casque/InstantPhotoX3
casque
2023-09-02T11:16:04Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-02T11:14:33Z
--- license: creativeml-openrail-m ---
dwitidibyajyoti/fine_tune_layoutmlv3_model
dwitidibyajyoti
2023-09-02T11:15:36Z
77
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-30T09:45:10Z
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# test

This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- Precision: 0.5109
- Recall: 0.6026
- F1: 0.5529
- Accuracy: 0.9222

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 8.33  | 100  | 0.6800          | 0.3371    | 0.3846 | 0.3593 | 0.7682   |
| No log        | 16.67 | 200  | 0.3088          | 0.5204    | 0.6538 | 0.5795 | 0.9156   |
| No log        | 25.0  | 300  | 0.2142          | 0.5326    | 0.6282 | 0.5765 | 0.9305   |
| No log        | 33.33 | 400  | 0.2301          | 0.5795    | 0.6538 | 0.6145 | 0.9288   |
| 0.4115        | 41.67 | 500  | 0.2426          | 0.5618    | 0.6410 | 0.5988 | 0.9272   |
| 0.4115        | 50.0  | 600  | 0.4171          | 0.6190    | 0.6667 | 0.6420 | 0.8924   |
| 0.4115        | 58.33 | 700  | 0.2265          | 0.5393    | 0.6154 | 0.5749 | 0.9371   |
| 0.4115        | 66.67 | 800  | 0.2869          | 0.5506    | 0.6282 | 0.5868 | 0.9156   |
| 0.4115        | 75.0  | 900  | 0.2633          | 0.5568    | 0.6282 | 0.5904 | 0.9272   |
| 0.0231        | 83.33 | 1000 | 0.2763          | 0.5109    | 0.6026 | 0.5529 | 0.9222   |

### Framework versions

- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
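An illustrative inference sketch for this LayoutLMv3 token classifier: the processor is taken from the base model with its built-in OCR (this is an assumption; the repo may ship its own processor files), and the document image path is a placeholder.

```python
# Sketch: run document token classification with the fine-tuned LayoutLMv3 model.
from PIL import Image
from transformers import AutoModelForTokenClassification, AutoProcessor

repo = "dwitidibyajyoti/fine_tune_layoutmlv3_model"
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")   # placeholder document image
encoding = processor(image, return_tensors="pt")    # OCR extracts words + boxes
predictions = model(**encoding).logits.argmax(-1)[0].tolist()
print([model.config.id2label[p] for p in predictions])
```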
KhalfounMehdi/vit_musculoskeletal_abnormality_detection_mura_224px_16bs_20ep
KhalfounMehdi
2023-09-02T11:10:51Z
191
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "autotrain", "dataset:KhalfounMehdi/mura_dataset_processed_224px_train_val", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-02T11:10:27Z
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
datasets:
- KhalfounMehdi/mura_dataset_processed_224px_train_val
---

# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

- loss: 0.5185230374336243
- f1: 0.8211164615658998
- precision: 0.7175810473815462
- recall: 0.9595664860358483
- auc: 0.7988417458585272
- accuracy: 0.749312671832042
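A hedged usage sketch: classify a radiograph with this AutoTrain ViT checkpoint via the image-classification pipeline (the image file name is a placeholder).

```python
# Usage sketch: musculoskeletal abnormality detection on a radiograph image.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="KhalfounMehdi/vit_musculoskeletal_abnormality_detection_mura_224px_16bs_20ep",
)
print(classifier("radiograph.png"))  # returns labels with confidence scores
```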
aigrils2/primitive0-diffuser
aigrils2
2023-09-02T11:05:44Z
29
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "lora", "base_model:wangjun/majicmix-realistic-v6", "base_model:adapter:wangjun/majicmix-realistic-v6", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-02T10:20:37Z
---
base_model: wangjun/majicmix-realistic-v6
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
pipeline_tag: text-to-image
---
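The tags mark this as a Stable Diffusion pipeline carrying LoRA weights on top of majicmix-realistic-v6. A loading sketch under those assumptions (repo layout, fp16, and the prompt are not confirmed by the card):

```python
# Sketch: load this repo for text-to-image generation with diffusers.
import torch
from diffusers import StableDiffusionPipeline

# Option A: load the repo directly as a pipeline (diffusers:StableDiffusionPipeline tag).
pipe = StableDiffusionPipeline.from_pretrained(
    "aigrils2/primitive0-diffuser", torch_dtype=torch.float16
).to("cuda")

# Option B (if only LoRA weights are shipped): start from the base model and attach the adapter.
# pipe = StableDiffusionPipeline.from_pretrained("wangjun/majicmix-realistic-v6", torch_dtype=torch.float16).to("cuda")
# pipe.load_lora_weights("aigrils2/primitive0-diffuser")

image = pipe("masterpiece, best quality, portrait photo").images[0]
image.save("sample.png")
```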
casque/majicmixRealistic_betterV2V25
casque
2023-09-02T11:00:36Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-02T10:43:18Z
--- license: creativeml-openrail-m ---
Tharun2003/tharun-3
Tharun2003
2023-09-02T10:57:42Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T10:53:06Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.6.0.dev0
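The quantization settings listed above map directly onto a `BitsAndBytesConfig` when reloading a base model for this adapter; a sketch is below, and the base model id is a placeholder because the card does not name it.

```python
# Sketch: rebuild the 4-bit (nf4) quantization config used during training.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # placeholder: base model is not stated in the card
    quantization_config=bnb_config,
    device_map="auto",
)
```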
andrewcho92/helloworld
andrewcho92
2023-09-02T10:33:10Z
0
0
null
[ "text-generation", "en", "license:openrail", "region:us" ]
text-generation
2023-09-02T10:14:37Z
---
license: openrail
language:
- en
pipeline_tag: text-generation
---
adimazuz/texi-v3
adimazuz
2023-09-02T10:30:56Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T10:30:54Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: texi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="adimazuz/texi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
adimazuz/q-FrozenLake-v1-4x4-noSlippery
adimazuz
2023-09-02T10:23:17Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T10:23:15Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="adimazuz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
jigglesaw/finetuning-sentiment-model-3000-samples
jigglesaw
2023-09-02T10:16:22Z
106
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-02T08:56:24Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      config: plain_text
      split: test
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8666666666666667
    - name: F1
      type: f1
      value: 0.870967741935484
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3394
- Accuracy: 0.8667
- F1: 0.8710

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
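A hedged usage sketch for the IMDB sentiment classifier via the text-classification pipeline (the example sentence is a placeholder):

```python
# Usage sketch: binary sentiment classification with the fine-tuned DistilBERT.
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="jigglesaw/finetuning-sentiment-model-3000-samples",
)
print(sentiment("This movie was far better than I expected."))
```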
gg4ever/trOCR-final
gg4ever
2023-09-02T10:15:40Z
126
0
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "image-to-text", "ko", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2023-08-22T11:31:10Z
---
license: apache-2.0
language:
- ko
metrics:
- cer
- wer
pipeline_tag: image-to-text
---

# trOCR-final

Fine-tuned VisionEncoderDecoderModel (encoder + decoder):
- encoder = 'facebook/deit-base-distilled-patch16-384'
- decoder = 'klue/roberta-base'

## How to Get Started with the Model

```python
import torch
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoTokenizer, TrOCRProcessor

device = torch.device('cuda')  # change 'cuda' if you need

image_path = '(your image path)'  # the image can be .jpg or .png
image = Image.open(image_path)

# Hugging Face download: https://huggingface.co/gg4ever/trOCR-final
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
trocr_model = "gg4ever/trOCR-final"
model = VisionEncoderDecoderModel.from_pretrained(trocr_model).to(device)
tokenizer = AutoTokenizer.from_pretrained(trocr_model)

pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)
generated_ids = model.generate(pixel_values)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```

## Training Details

### Training Data

- 1M words generated by TextRecognitionDataGenerator (trdg): https://github.com/Belval/TextRecognitionDataGenerator/blob/master/trdg/run.py
- 1.1M words from the AI-hub OCR words dataset: https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=81

### Training Hyperparameters

| hyperparameters | values |
|-----------------|--------|
| predict_with_generate | True |
| evaluation_strategy | "steps" |
| per_device_train_batch_size | 32 |
| per_device_eval_batch_size | 32 |
| num_train_epochs | 2 |
| fp16 | True |
| learning_rate | 4e-5 |
| eval_steps | 10000 |
| warmup_steps | 20000 |
| weight_decay | 0.01 |
Lilsunx/sabari
Lilsunx
2023-09-02T10:15:00Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T10:13:38Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.6.0.dev0
muralee491/murale
muralee491
2023-09-02T10:14:33Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-02T10:12:40Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.6.0.dev0
StefanoCaloni/dqn-SpaceInvaders
StefanoCaloni
2023-09-02T10:04:52Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-02T08:32:06Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 299.00 +/- 68.26
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga StefanoCaloni -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga StefanoCaloni -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga StefanoCaloni
```

## Hyperparameters
```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 10000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 10000),
             ('n_timesteps', 100000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 100),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
andrei-saceleanu/detr-resnet-50_finetuned_cppe5
andrei-saceleanu
2023-09-02T10:00:41Z
187
0
transformers
[ "transformers", "pytorch", "detr", "object-detection", "generated_from_trainer", "dataset:cppe-5", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2023-09-02T09:07:57Z
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# detr-resnet-50_finetuned_cppe5

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
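An illustrative sketch of running the fine-tuned DETR checkpoint through the object-detection pipeline (the image path and the score threshold are assumptions):

```python
# Usage sketch: detect PPE objects (CPPE-5 classes) in an image.
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="andrei-saceleanu/detr-resnet-50_finetuned_cppe5",
)
print(detector("worksite.jpg", threshold=0.5))  # list of labels, scores, boxes
```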
fathercc/majiczhenshi
fathercc
2023-09-02T09:16:46Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-08-02T12:23:04Z
--- license: creativeml-openrail-m ---
MP-1961/vit-base-patch16-224-finetuned-flower
MP-1961
2023-09-02T09:13:52Z
163
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-02T09:03:21Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-finetuned-flower

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
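A hedged inference sketch without the pipeline helper: preprocess with the image processor shipped in the repo and take the argmax over the logits (the image path is a placeholder).

```python
# Sketch: classify a flower image with the fine-tuned ViT checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "MP-1961/vit-base-patch16-224-finetuned-flower"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("flower.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```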
franziskaM/b25-wav2vec2-large-xls-r-romansh-colab
franziskaM
2023-09-02T08:58:53Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_13_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-01T10:20:50Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: b25-wav2vec2-large-xls-r-romansh-colab
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_13_0
      type: common_voice_13_0
      config: rm-vallader
      split: test
      args: rm-vallader
    metrics:
    - name: Wer
      type: wer
      value: 0.24149976711690732
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# b25-wav2vec2-large-xls-r-romansh-colab

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3303
- Wer: 0.2415

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.1605        | 3.05  | 400  | 2.9535          | 1.0    |
| 2.9451        | 6.11  | 800  | 2.9092          | 1.0    |
| 1.7795        | 9.16  | 1200 | 0.4982          | 0.4951 |
| 0.4094        | 12.21 | 1600 | 0.3883          | 0.3575 |
| 0.2374        | 15.27 | 2000 | 0.3151          | 0.2876 |
| 0.1674        | 18.32 | 2400 | 0.3284          | 0.2783 |
| 0.1385        | 21.37 | 2800 | 0.3408          | 0.2641 |
| 0.1133        | 24.43 | 3200 | 0.3355          | 0.2538 |
| 0.1015        | 27.48 | 3600 | 0.3303          | 0.2415 |

### Framework versions

- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
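A hedged usage sketch: transcribe a 16 kHz Romansh audio clip with the fine-tuned checkpoint via the ASR pipeline (the audio file name is a placeholder).

```python
# Usage sketch: Romansh speech recognition with the fine-tuned wav2vec2 model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="franziskaM/b25-wav2vec2-large-xls-r-romansh-colab",
)
print(asr("sample_16khz.wav")["text"])
```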
Yntec/DreamLikeRemix
Yntec
2023-09-02T08:58:22Z
420
3
diffusers
[ "diffusers", "safetensors", "anime", "Dreamlike", "art", "Retro", "Elldreths", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:other", "autotrain_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-11T14:26:00Z
---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- Dreamlike
- art
- Retro
- Elldreths
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: false
---

# DreamLikeRemix

Samples and prompts:

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/UaWl0HP-FhNaqWs9Uqvr9.png)

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/PQahHQE7YSNQ-wfeBhIag.png)

beautiful background, beautiful detailed girl, Cartoon Pretty CUTE Girl, sitting on a box of cherries, DETAILED CHIBI EYES, holding antique slot machine, detailed hair, Ponytail, key shot at computer monitor, Magazine ad, iconic, 1940, sharp focus. Acrylic art on canvas By KlaysMoji and artgerm and Clay Mann and leyendecker

A mix of Dreamlike Diffusion and a little bit of Elldreths Retro Mix. Full recipe:

# Add Difference 1.0

Primary model: Dreamlike Diffusion
Secondary model: Elldreths Retro Mix
Tertiary model: v1-5-pruned-fp16-no-ema
Output Model: Temporary

# Weighted Sum 0.85

Primary model: Temporary
Secondary model: Dreamlike Diffusion
Output Model: dreamLikeRemix

Original pages:

https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0

https://civitai.com/models/1474/elldreths-retro-mix
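A rough sketch of the recipe above expressed over raw state dicts. The checkpoint file names are placeholders, the 0.85 multiplier is interpreted in the AUTOMATIC1111 checkpoint-merger convention (an assumption; merges like this are normally done in that UI rather than by hand), and dtype or missing-key handling is ignored for brevity.

```python
# Sketch: "Add Difference 1.0" followed by "Weighted Sum 0.85" over state dicts.
import torch

dreamlike = torch.load("dreamlike-diffusion-1.0.ckpt", map_location="cpu")["state_dict"]
retromix = torch.load("elldreths-retro-mix.ckpt", map_location="cpu")["state_dict"]
sd15 = torch.load("v1-5-pruned-fp16-no-ema.ckpt", map_location="cpu")["state_dict"]

# Add Difference with multiplier 1.0: Temporary = primary + 1.0 * (secondary - tertiary)
temporary = {k: dreamlike[k] + (retromix[k] - sd15[k])
             for k in dreamlike if k in retromix and k in sd15}

# Weighted Sum with multiplier 0.85 (assumed convention: (1 - M) * primary + M * secondary)
merged = {k: 0.15 * temporary[k] + 0.85 * dreamlike[k] for k in temporary}

torch.save({"state_dict": merged}, "dreamLikeRemix.ckpt")
```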