| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-21 12:34:09 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (568 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-21 12:33:58 |
| card | string (length) | 11 | 1.01M |
nbelle/bert-finetuned-squad
nbelle
2024-02-20T19:39:17Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-02-19T21:10:26Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
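A minimal usage sketch for this question-answering checkpoint, assuming the standard 🤗 Transformers pipeline API applies; the question and context strings are placeholders:

```python
from transformers import pipeline

# Load the fine-tuned SQuAD-style checkpoint as an extractive QA pipeline.
qa = pipeline("question-answering", model="nbelle/bert-finetuned-squad")

result = qa(
    question="What base model was fine-tuned?",  # placeholder question
    context="bert-finetuned-squad is a fine-tuned version of bert-base-cased.",
)
print(result["answer"], round(result["score"], 3))
```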
askenaz/results6071251431939632204
askenaz
2024-02-20T19:35:43Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-02-20T19:35:36Z
--- library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-2-7b-chat-hf model-index: - name: results6071251431939632204 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results6071251431939632204 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1
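A hedged sketch of how a PEFT/LoRA adapter with this `base_model` field is usually attached; access to the gated meta-llama/Llama-2-7b-chat-hf weights and a GPU with float16 support are assumed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model named in the card, then attach the adapter weights on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "askenaz/results6071251431939632204")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```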
askenaz/results-4140812489330439434
askenaz
2024-02-20T19:20:31Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-02-20T19:20:24Z
--- library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-2-7b-chat-hf model-index: - name: results-4140812489330439434 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results-4140812489330439434 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1
Sousa1/asasasaas
Sousa1
2024-02-20T19:19:01Z
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-cascade", "base_model:adapter:stabilityai/stable-cascade", "region:us" ]
text-to-image
2024-02-20T19:18:54Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/OIP-removebg-preview (1).png base_model: stabilityai/stable-cascade instance_prompt: null --- # asa <Gallery /> ## Download model [Download](/Sousa1/asasasaas/tree/main) them in the Files & versions tab.
ErnestBeckham/BreastResViT
ErnestBeckham
2024-02-20T19:03:07Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2024-02-20T19:01:48Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | 1.0 | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 9.999999974752427e-07 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
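The card lists optimizer settings but no loading code; assuming the repository holds a standard tf-keras SavedModel (which the `keras`/`tf-keras` tags suggest), it can typically be reloaded via huggingface_hub:

```python
from huggingface_hub import from_pretrained_keras

# Reload the tf-keras model straight from the Hub repository (TensorFlow must be installed).
model = from_pretrained_keras("ErnestBeckham/BreastResViT")
model.summary()
```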
nubby/kl-f8-anime2-blessed
nubby
2024-02-20T19:03:03Z
179
14
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T19:17:01Z
--- license: creativeml-openrail-m --- These VAEs are modified versions of the popular [kl-f8-anime2 vae](https://huggingface.co/hakurei/waifu-diffusion-v1-4). These VAEs should not produce NaN-in-VAE errors even when used at half precision. They have been modified using the [VAE-BlessUp script](https://github.com/sALTaccount/VAE-BlessUp) to produce lower contrast images than the original version. ## The recommended version is [WD1-4-kl-f8-anime2-bless09.safetensors](https://huggingface.co/nubby/kl-f8-anime2-blessed/blob/main/WD1-4-kl-f8-anime2-bless09.safetensors) Other versions are provided to accommodate different models and different tastes. If you typically prefer the very low contrast of the NAI/Anything vae, then you may want to try the 08 version. [WD1-4-kl-f8-anime2-bless08.safetensors](https://huggingface.co/nubby/kl-f8-anime2-blessed/blob/main/WD1-4-kl-f8-anime2-bless08.safetensors) = 80% contrast of original/20% decrease [WD1-4-kl-f8-anime2-bless09.safetensors](https://huggingface.co/nubby/kl-f8-anime2-blessed/blob/main/WD1-4-kl-f8-anime2-bless09.safetensors) = 90% contrast of original/10% decrease [WD1-4-kl-f8-anime2-bless1-05.safetensors](https://huggingface.co/nubby/kl-f8-anime2-blessed/blob/main/WD1-4-kl-f8-anime2-bless1-05.safetensors) = 105% contrast of original/5% increase [WD1-4-kl-f8-anime2-bless1-1.safetensors](https://huggingface.co/nubby/kl-f8-anime2-blessed/blob/main/WD1-4-kl-f8-anime2-bless1-1.safetensors) = 110% contrast of original/10% increase
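A hedged sketch of how one of these VAE files is typically dropped into a Stable Diffusion 1.x pipeline with diffusers; the base checkpoint below is only an example, not something this card specifies:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Load the recommended "bless09" VAE file directly from this repository...
vae = AutoencoderKL.from_single_file(
    "https://huggingface.co/nubby/kl-f8-anime2-blessed/blob/main/WD1-4-kl-f8-anime2-bless09.safetensors",
    torch_dtype=torch.float16,
)
# ...and attach it to an SD 1.x pipeline (example base model, not named by the card).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, anime style", num_inference_steps=25).images[0]
```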
askenaz/results_-1949220622505963237
askenaz
2024-02-20T19:02:38Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-02-20T19:02:32Z
--- library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-2-7b-chat-hf model-index: - name: results_-1949220622505963237 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results_-1949220622505963237 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1
yvelos/Annotator_1_La
yvelos
2024-02-20T18:58:58Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-02-20T16:35:34Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
platero/q-Taxi-v3
platero
2024-02-20T18:52:40Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T18:52:38Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.80 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="platero/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
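The usage snippet in the card relies on a `load_from_hub` helper defined in the Deep RL course notebooks rather than in a published package; a self-contained equivalent, assuming the pickle holds a dict with `env_id` and `qtable` keys as in that course, might look like:

```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download and unpickle the Q-table dictionary pushed with this repository.
path = hf_hub_download(repo_id="platero/q-Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# Rebuild the environment from the stored metadata and act greedily with the Q-table.
env = gym.make(model["env_id"])
state, _ = env.reset()
action = model["qtable"][state].argmax()
```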
LiukG/gut_1024-finetuned-lora-NT-v2-500m-ms
LiukG
2024-02-20T18:46:01Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "esm", "text-classification", "generated_from_trainer", "custom_code", "base_model:InstaDeepAI/nucleotide-transformer-v2-500m-multi-species", "base_model:finetune:InstaDeepAI/nucleotide-transformer-v2-500m-multi-species", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-20T18:44:02Z
--- license: cc-by-nc-sa-4.0 base_model: InstaDeepAI/nucleotide-transformer-v2-500m-multi-species tags: - generated_from_trainer metrics: - f1 - matthews_correlation - accuracy model-index: - name: gut_1024-finetuned-lora-NT-v2-500m-multi-species results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gut_1024-finetuned-lora-NT-v2-500m-multi-species This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-v2-500m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-500m-multi-species) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4480 - F1: 0.8532 - Matthews Correlation: 0.6018 - Accuracy: 0.8091 - F1 Score: 0.8532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Matthews Correlation | Accuracy | F1 Score | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------:|:--------:|:--------:| | 0.7913 | 0.02 | 100 | 0.6865 | 0.7478 | 0.0 | 0.5971 | 0.7478 | | 0.6762 | 0.04 | 200 | 0.7888 | 0.6217 | 0.3157 | 0.6284 | 0.6217 | | 0.6291 | 0.05 | 300 | 0.5765 | 0.7628 | 0.4323 | 0.7234 | 0.7628 | | 0.563 | 0.07 | 400 | 0.5184 | 0.8304 | 0.5258 | 0.7724 | 0.8304 | | 0.5206 | 0.09 | 500 | 0.5402 | 0.8281 | 0.5142 | 0.7580 | 0.8281 | | 0.4639 | 0.11 | 600 | 0.4681 | 0.8461 | 0.5775 | 0.7969 | 0.8461 | | 0.4359 | 0.12 | 700 | 0.5136 | 0.8470 | 0.5774 | 0.7918 | 0.8470 | | 0.4861 | 0.14 | 800 | 0.4530 | 0.8365 | 0.5714 | 0.7965 | 0.8365 | | 0.4923 | 0.16 | 900 | 0.4480 | 0.8496 | 0.5889 | 0.8024 | 0.8496 | | 0.4369 | 0.18 | 1000 | 0.4480 | 0.8532 | 0.6018 | 0.8091 | 0.8532 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
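Because the tags mark this as an ESM-style checkpoint with `custom_code`, a hedged inference sketch would load it with `trust_remote_code`; the tokenizer is assumed to come from the base model and the DNA sequence is a placeholder:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "InstaDeepAI/nucleotide-transformer-v2-500m-multi-species", trust_remote_code=True
)
model = AutoModelForSequenceClassification.from_pretrained(
    "LiukG/gut_1024-finetuned-lora-NT-v2-500m-ms", trust_remote_code=True
)

# Classify a (placeholder) DNA sequence.
inputs = tokenizer("ATTCCGATTCCGATTCCG", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```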
Leul78/sfttrial4
Leul78
2024-02-20T18:42:12Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-20T18:38:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
julia-wenkmann/segformer-b0-finetuned-segments-sidewalk-oct-22
julia-wenkmann
2024-02-20T18:41:25Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2024-02-18T16:59:36Z
--- license: other base_model: nvidia/mit-b0 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b0-finetuned-segments-sidewalk-oct-22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-segments-sidewalk-oct-22 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the julia-wenkmann/TennisSegmentation dataset. It achieves the following results on the evaluation set: - Loss: 0.0577 - Mean Iou: 0.1977 - Mean Accuracy: 0.2635 - Overall Accuracy: 0.5273 - Accuracy Undefined: nan - Accuracy Ball: 0.0 - Accuracy Playertop: 0.1196 - Accuracy Playerbottom: 0.6710 - Iou Undefined: 0.0 - Iou Ball: 0.0 - Iou Playertop: 0.1196 - Iou Playerbottom: 0.6710 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Undefined | Accuracy Ball | Accuracy Playertop | Accuracy Playerbottom | Iou Undefined | Iou Ball | Iou Playertop | Iou Playerbottom | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-------------:|:------------------:|:---------------------:|:-------------:|:--------:|:-------------:|:----------------:| | 1.0528 | 1.67 | 20 | 1.1784 | 0.1882 | 0.2528 | 0.5396 | nan | 0.0 | 0.0471 | 0.7115 | 0.0 | 0.0 | 0.0411 | 0.7115 | | 0.8484 | 3.33 | 40 | 0.8579 | 0.1435 | 0.1914 | 0.4261 | nan | 0.0 | 0.0 | 0.5741 | 0.0 | 0.0 | 0.0 | 0.5741 | | 0.6695 | 5.0 | 60 | 0.6303 | 0.1414 | 0.1885 | 0.4196 | nan | 0.0 | 0.0 | 0.5654 | 0.0 | 0.0 | 0.0 | 0.5654 | | 0.5808 | 6.67 | 80 | 0.5370 | 0.0901 | 0.1201 | 0.2674 | nan | 0.0 | 0.0 | 0.3603 | 0.0 | 0.0 | 0.0 | 0.3603 | | 0.4415 | 8.33 | 100 | 0.4385 | 0.1263 | 0.1685 | 0.3751 | nan | 0.0 | 0.0 | 0.5054 | 0.0 | 0.0 | 0.0 | 0.5054 | | 0.3955 | 10.0 | 120 | 0.3449 | 0.1177 | 0.1570 | 0.3496 | nan | 0.0 | 0.0 | 0.4710 | 0.0 | 0.0 | 0.0 | 0.4710 | | 0.3597 | 11.67 | 140 | 0.3006 | 0.1186 | 0.1582 | 0.3521 | nan | 0.0 | 0.0 | 0.4745 | 0.0 | 0.0 | 0.0 | 0.4745 | | 0.2761 | 13.33 | 160 | 0.2592 | 0.0968 | 0.1290 | 0.2873 | nan | 0.0 | 0.0 | 0.3871 | 0.0 | 0.0 | 0.0 | 0.3871 | | 0.2343 | 15.0 | 180 | 0.2044 | 0.0987 | 0.1316 | 0.2930 | nan | 0.0 | 0.0 | 0.3948 | 0.0 | 0.0 | 0.0 | 0.3948 | | 0.1953 | 16.67 | 200 | 0.1841 | 0.1258 | 0.1678 | 0.3736 | nan | 0.0 | 0.0 | 0.5033 | 0.0 | 0.0 | 0.0 | 0.5033 | | 0.1676 | 18.33 | 220 | 0.1558 | 0.1394 | 0.1858 | 0.4137 | nan | 0.0 | 0.0 | 0.5574 | 0.0 | 0.0 | 0.0 | 0.5574 | | 0.1486 | 20.0 | 240 | 0.1392 | 0.1448 | 0.1931 | 0.4299 | nan | 0.0 | 0.0 | 0.5792 | 0.0 | 0.0 | 0.0 | 0.5792 | | 0.1281 | 21.67 | 260 | 0.1234 | 0.1497 | 0.1996 | 0.4445 | nan | 0.0 | 0.0 | 0.5989 | 0.0 | 0.0 | 0.0 | 0.5989 | | 0.1138 | 23.33 | 280 | 0.1064 | 0.1407 | 0.1877 | 0.4178 | nan | 0.0 | 0.0 | 0.5630 | 0.0 | 0.0 | 0.0 | 0.5630 | | 0.1031 | 25.0 | 300 | 0.0952 | 0.1495 | 0.1993 | 0.4438 | nan | 0.0 | 0.0 | 0.5979 | 0.0 | 
0.0 | 0.0 | 0.5979 | | 0.094 | 26.67 | 320 | 0.0896 | 0.1500 | 0.2000 | 0.4454 | nan | 0.0 | 0.0 | 0.6001 | 0.0 | 0.0 | 0.0 | 0.6001 | | 0.0873 | 28.33 | 340 | 0.0889 | 0.1677 | 0.2236 | 0.4978 | nan | 0.0 | 0.0 | 0.6707 | 0.0 | 0.0 | 0.0 | 0.6707 | | 0.0822 | 30.0 | 360 | 0.0772 | 0.1631 | 0.2175 | 0.4843 | nan | 0.0 | 0.0 | 0.6526 | 0.0 | 0.0 | 0.0 | 0.6526 | | 0.0769 | 31.67 | 380 | 0.0739 | 0.1589 | 0.2119 | 0.4718 | nan | 0.0 | 0.0 | 0.6358 | 0.0 | 0.0 | 0.0 | 0.6358 | | 0.0798 | 33.33 | 400 | 0.0694 | 0.1603 | 0.2137 | 0.4758 | nan | 0.0 | 0.0 | 0.6411 | 0.0 | 0.0 | 0.0 | 0.6411 | | 0.0704 | 35.0 | 420 | 0.0654 | 0.1680 | 0.2241 | 0.4910 | nan | 0.0 | 0.0158 | 0.6564 | 0.0 | 0.0 | 0.0158 | 0.6564 | | 0.0653 | 36.67 | 440 | 0.0633 | 0.1708 | 0.2278 | 0.4957 | nan | 0.0 | 0.0229 | 0.6604 | 0.0 | 0.0 | 0.0229 | 0.6604 | | 0.0648 | 38.33 | 460 | 0.0610 | 0.1743 | 0.2324 | 0.5013 | nan | 0.0 | 0.0325 | 0.6648 | 0.0 | 0.0 | 0.0325 | 0.6648 | | 0.0616 | 40.0 | 480 | 0.0598 | 0.1859 | 0.2479 | 0.5189 | nan | 0.0 | 0.0664 | 0.6773 | 0.0 | 0.0 | 0.0664 | 0.6773 | | 0.0612 | 41.67 | 500 | 0.0586 | 0.1887 | 0.2517 | 0.5203 | nan | 0.0 | 0.0805 | 0.6745 | 0.0 | 0.0 | 0.0805 | 0.6745 | | 0.0693 | 43.33 | 520 | 0.0584 | 0.1953 | 0.2604 | 0.5300 | nan | 0.0 | 0.1003 | 0.6810 | 0.0 | 0.0 | 0.1003 | 0.6810 | | 0.0595 | 45.0 | 540 | 0.0567 | 0.1998 | 0.2664 | 0.5354 | nan | 0.0 | 0.1161 | 0.6831 | 0.0 | 0.0 | 0.1161 | 0.6831 | | 0.0564 | 46.67 | 560 | 0.0556 | 0.2007 | 0.2676 | 0.5378 | nan | 0.0 | 0.1165 | 0.6862 | 0.0 | 0.0 | 0.1165 | 0.6862 | | 0.0608 | 48.33 | 580 | 0.0555 | 0.2032 | 0.2710 | 0.5412 | nan | 0.0 | 0.1249 | 0.6880 | 0.0 | 0.0 | 0.1249 | 0.6880 | | 0.0599 | 50.0 | 600 | 0.0577 | 0.1977 | 0.2635 | 0.5273 | nan | 0.0 | 0.1196 | 0.6710 | 0.0 | 0.0 | 0.1196 | 0.6710 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
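A hedged sketch of running this SegFormer checkpoint through the image-segmentation pipeline; the image path is a placeholder, and for semantic segmentation the per-mask score may be None:

```python
from PIL import Image
from transformers import pipeline

segmenter = pipeline(
    "image-segmentation",
    model="julia-wenkmann/segformer-b0-finetuned-segments-sidewalk-oct-22",
)

# Run inference on a local image (placeholder path) and list the predicted classes.
results = segmenter(Image.open("tennis_frame.jpg"))
for r in results:
    print(r["label"])
```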
anupk/AskPaul-V1
anupk
2024-02-20T18:36:34Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-20T18:33:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
didoM/dido
didoM
2024-02-20T18:20:43Z
0
0
null
[ "bg", "license:apache-2.0", "region:us" ]
null
2024-02-20T18:16:18Z
--- license: apache-2.0 language: - bg ---
dsteiner93/poca-SoccerTwos
dsteiner93
2024-02-20T18:18:07Z
48
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-02-20T18:17:57Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: dsteiner93/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
llmixer/BigWeave-v7.1-124b
llmixer
2024-02-20T18:17:54Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "frankenmerge", "merge", "124b", "conversational", "en", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-20T12:42:23Z
--- license: llama2 language: - en pipeline_tag: conversational tags: - frankenmerge - merge - 124b --- # BigWeave v7.1 124b <img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/4CbbAN-X7ZWj702JrcCGH.png" width=600> The BigWeave models aim to experimentally identify merge settings for increasing model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared. # Prompting Format Vicuna and Alpaca. # Merge process This is a merge of Xwin-LM/Xwin-LM-70B-V0.1 and Sao10K/Euryale-1.3-L2-70B. It uses the same configuration as alpindale/goliath-120b but with the ranges "fixed" since goliath omits some layers (see [this thread](https://huggingface.co/ChuckMcSneed/WinterGoliath-123b/discussions/2#65d324693ecda975d83089d3)). Merge configuration: ``` slices: - sources: - model: Xwin-LM/Xwin-LM-70B-V0.1 layer_range: [0,16] - sources: - model: Sao10K/Euryale-1.3-L2-70B layer_range: [8,24] - sources: - model: Xwin-LM/Xwin-LM-70B-V0.1 layer_range: [16,32] - sources: - model: Sao10K/Euryale-1.3-L2-70B layer_range: [24,40] - sources: - model: Xwin-LM/Xwin-LM-70B-V0.1 layer_range: [32,48] - sources: - model: Sao10K/Euryale-1.3-L2-70B layer_range: [40,56] - sources: - model: Xwin-LM/Xwin-LM-70B-V0.1 layer_range: [48,64] - sources: - model: Sao10K/Euryale-1.3-L2-70B layer_range: [56,72] - sources: - model: Xwin-LM/Xwin-LM-70B-V0.1 layer_range: [64,80] merge_method: passthrough dtype: float16 ```
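The card only shows the merge recipe; as a hedged loading sketch (a 124b merge will not fit on a single consumer GPU, so sharded loading via device_map is assumed, and the Vicuna-style prompt just follows the prompting format named above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llmixer/BigWeave-v7.1-124b")
model = AutoModelForCausalLM.from_pretrained(
    "llmixer/BigWeave-v7.1-124b",
    torch_dtype=torch.float16,
    device_map="auto",  # shard across available GPUs / offload as needed
)

prompt = "USER: Write a haiku about merging models.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```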
sgonzalezsilot/FakeNewsDetection_Zenodo_NER
sgonzalezsilot
2024-02-20T18:14:29Z
4
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-20T18:04:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pemoreto7/q-Taxi-pemoreto
pemoreto7
2024-02-20T18:12:28Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T18:12:27Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-pemoreto results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="pemoreto7/q-Taxi-pemoreto", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
x28x28x28/chikolita
x28x28x28
2024-02-20T18:10:47Z
1
0
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-02-20T17:34:07Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: hot girl tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain. Text encoder was not trained.
pemoreto7/q-FrozenLake-v1-4x4-noSlippery
pemoreto7
2024-02-20T18:08:50Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T18:08:48Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="pemoreto7/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v4.0
ITT-AF
2024-02-20T18:07:45Z
57
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-20T16:32:43Z
--- license: cc-by-nc-4.0 --- # ITT-AF/ITT-42dot_LLM-PLM-1.3B-v4.0 This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.0.0 - Tokenizers 0.15.0
surajkarki/ft-GPT2-with-lyrics
surajkarki
2024-02-20T18:02:07Z
7
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-20T18:01:49Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: ft-GPT2-with-lyrics results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft-GPT2-with-lyrics This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1112 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0058 | 1.0 | 71 | 1.5293 | | 1.5797 | 2.0 | 142 | 1.3814 | | 1.4549 | 3.0 | 213 | 1.3093 | | 1.3837 | 4.0 | 284 | 1.2661 | | 1.3354 | 5.0 | 355 | 1.2328 | | 1.2976 | 6.0 | 426 | 1.2142 | | 1.2678 | 7.0 | 497 | 1.1973 | | 1.2401 | 8.0 | 568 | 1.1825 | | 1.2168 | 9.0 | 639 | 1.1737 | | 1.1971 | 10.0 | 710 | 1.1604 | | 1.1797 | 11.0 | 781 | 1.1529 | | 1.1671 | 12.0 | 852 | 1.1451 | | 1.1517 | 13.0 | 923 | 1.1395 | | 1.1389 | 14.0 | 994 | 1.1357 | | 1.1267 | 15.0 | 1065 | 1.1325 | | 1.1189 | 16.0 | 1136 | 1.1278 | | 1.1076 | 17.0 | 1207 | 1.1246 | | 1.1019 | 18.0 | 1278 | 1.1201 | | 1.0925 | 19.0 | 1349 | 1.1201 | | 1.0885 | 20.0 | 1420 | 1.1159 | | 1.0806 | 21.0 | 1491 | 1.1165 | | 1.0785 | 22.0 | 1562 | 1.1178 | | 1.0703 | 23.0 | 1633 | 1.1171 | | 1.0674 | 24.0 | 1704 | 1.1130 | | 1.0604 | 25.0 | 1775 | 1.1132 | | 1.062 | 26.0 | 1846 | 1.1120 | | 1.0545 | 27.0 | 1917 | 1.1123 | | 1.0534 | 28.0 | 1988 | 1.1106 | | 1.0528 | 29.0 | 2059 | 1.1115 | | 1.0516 | 30.0 | 2130 | 1.1112 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.2 - Datasets 2.15.0 - Tokenizers 0.15.1
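A brief, hedged generation sketch for this lyrics-tuned GPT-2 checkpoint; the prompt is illustrative:

```python
from transformers import pipeline

# Sample lyrics-style continuations from the fine-tuned checkpoint.
generator = pipeline("text-generation", model="surajkarki/ft-GPT2-with-lyrics")
out = generator("Under the city lights", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```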
Yntec/DreamWorksRemix
Yntec
2024-02-20T17:52:39Z
781
4
diffusers
[ "diffusers", "safetensors", "General", "Cinematic", "CGI", "Animation", "tyzehd893", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-10T09:27:41Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - General - Cinematic - CGI - Animation - tyzehd893 - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # DreamWorks Remix A mix of DreamWorks and DreamWorks Diffusion to produce a model that responds well to any prompt and not just those that have "Dreamworks Artstyle" included (I had to put crying emoticons over the faces of Dreamworks Diffusion's comparison image to show what I mean.) Comparison: ![Top text to image DreamWorks Remix Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/EPW48KVWIh6DWefEBgbt7.png) (Click for larger) Samples and prompts: ![Free AI image generator DreamWorks Remix](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/AU3uA-yufsvZ6yzKk5qZ8.png) (Click for larger) Top left: Father with daughter. festive scene at a copper brewery with a wooden keg of beer in the center. Pretty cute little girl sitting with Santa Claus chef. Display mugs of dark beer accompanied by colorful happy halloween ingredients Top right: blonde pretty Princess Peach in the mushroom kingdom Bottom left: Dreamworks artstyle, baby pig Bottom right: cute little Edith from Despicable Me from_side pixar dreamworks movie scene plaid skirt sneakers playing with her sister Agnes in the backyard bright sunny day (masterpiece) (CGI) (best quality) (detailed) (intricate) (8k) (HDR) (cinematic lighting) (sharp focus) Original pages: https://civitai.com/models/74343/dreamworks-diffusion https://huggingface.co/Yntec/DreamWorks # Recipe: - SuperMerger Weight sum MBW 0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,0,0 Model A: DreamWorks Diffusion Model B: DreamWorks Output: DreamWorks Remix
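Since the tags include diffusers:StableDiffusionPipeline, a hedged text-to-image sketch would be (prompt taken from the card's samples, CUDA assumed):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/DreamWorksRemix", torch_dtype=torch.float16
).to("cuda")

# One of the sample prompts listed in the card.
image = pipe("Dreamworks artstyle, baby pig", num_inference_steps=30).images[0]
image.save("dreamworks_remix_sample.png")
```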
yvelos/Annotator
yvelos
2024-02-20T17:51:54Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-02-19T01:03:58Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
mnavas/llama2-test
mnavas
2024-02-20T17:47:14Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-02-20T17:47:13Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
unanam/smallloraft-v5
unanam
2024-02-20T17:46:49Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-small", "base_model:adapter:openai/whisper-small", "region:us" ]
null
2024-02-19T08:54:39Z
--- library_name: peft base_model: openai/whisper-small --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
sgonzalezsilot/FakeNewsDetection_Constraint_NER
sgonzalezsilot
2024-02-20T17:42:06Z
4
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-20T17:27:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
glamprou/switch-base-8-sst2
glamprou
2024-02-20T17:37:16Z
4
0
transformers
[ "transformers", "safetensors", "switch_transformers", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:google/switch-base-8", "base_model:finetune:google/switch-base-8", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-20T17:15:19Z
--- language: - en license: apache-2.0 base_model: google/switch-base-8 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: sst2 results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.9461009174311926 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sst2 This model is a fine-tuned version of [google/switch-base-8](https://huggingface.co/google/switch-base-8) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.1893 - Accuracy: 0.9461 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.1 - Datasets 2.15.0 - Tokenizers 0.15.0
dah1214/faset-perf-whisper-medium-tw
dah1214
2024-02-20T17:36:35Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai/whisper-large-v2", "base_model:adapter:openai/whisper-large-v2", "region:us" ]
null
2024-02-19T02:56:46Z
--- library_name: peft base_model: openai/whisper-large-v2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
sai17/cards-top_right_swin-tiny-patch4-window7-224-finetuned-v2_more_data
sai17
2024-02-20T17:23:19Z
593
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-19T09:26:21Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: cards-top_right_swin-tiny-patch4-window7-224-finetuned-v2_more_data results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.6269272417882741 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cards-top_right_swin-tiny-patch4-window7-224-finetuned-v2_more_data This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9268 - Accuracy: 0.6269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.4585 | 1.0 | 1363 | 1.2999 | 0.4337 | | 1.4211 | 2.0 | 2726 | 1.1663 | 0.4927 | | 1.4203 | 3.0 | 4089 | 1.0770 | 0.5312 | | 1.4669 | 4.0 | 5453 | 1.0744 | 0.5496 | | 1.3781 | 5.0 | 6816 | 1.0245 | 0.5599 | | 1.3852 | 6.0 | 8179 | 1.0645 | 0.5402 | | 1.3407 | 7.0 | 9542 | 1.0011 | 0.5696 | | 1.3727 | 8.0 | 10906 | 0.9898 | 0.5801 | | 1.328 | 9.0 | 12269 | 0.9965 | 0.5738 | | 1.3374 | 10.0 | 13632 | 0.9722 | 0.5874 | | 1.3513 | 11.0 | 14995 | 0.9632 | 0.5873 | | 1.3728 | 12.0 | 16359 | 0.9818 | 0.5802 | | 1.3289 | 13.0 | 17722 | 0.9845 | 0.5729 | | 1.3219 | 14.0 | 19085 | 0.9633 | 0.5881 | | 1.2893 | 15.0 | 20448 | 0.9312 | 0.6004 | | 1.3088 | 16.0 | 21812 | 0.9537 | 0.5903 | | 1.3252 | 17.0 | 23175 | 0.9432 | 0.5986 | | 1.3424 | 18.0 | 24538 | 0.9291 | 0.5979 | | 1.3077 | 19.0 | 25901 | 0.9245 | 0.6020 | | 1.2466 | 20.0 | 27265 | 0.9304 | 0.6039 | | 1.2767 | 21.0 | 28628 | 0.9122 | 0.6099 | | 1.2553 | 22.0 | 29991 | 0.9312 | 0.6005 | | 1.2698 | 23.0 | 31354 | 0.9137 | 0.6092 | | 1.2591 | 24.0 | 32718 | 0.9113 | 0.6134 | | 1.277 | 25.0 | 34081 | 0.9095 | 0.6142 | | 1.2742 | 26.0 | 35444 | 0.9227 | 0.6100 | | 1.222 | 27.0 | 36807 | 0.9090 | 0.6147 | | 1.2368 | 28.0 | 38171 | 0.9020 | 0.6172 | | 1.198 | 29.0 | 39534 | 0.9071 | 0.6157 | | 1.2076 | 30.0 | 40897 | 0.9031 | 0.6214 | | 1.212 | 31.0 | 42260 | 0.9136 | 0.6175 | | 1.2105 | 32.0 | 43624 | 0.9170 | 0.6151 | | 1.2687 | 33.0 | 44987 | 0.9047 | 0.6186 | | 1.2038 | 34.0 | 46350 | 0.9061 | 0.6190 | | 1.1957 | 35.0 | 47713 | 0.9052 | 0.6255 | | 1.1962 | 36.0 | 49077 | 0.9057 | 0.6210 | | 1.1866 | 37.0 | 50440 | 0.9105 | 0.6227 | | 1.2545 | 38.0 | 51803 | 0.9173 | 0.6206 | | 1.1642 | 39.0 | 53166 | 0.9120 | 0.6239 | | 1.1711 | 40.0 | 54530 | 0.9235 | 0.6177 | | 1.2339 | 41.0 | 55893 | 0.9295 | 0.6143 | | 1.1132 | 42.0 | 57256 | 0.9143 | 0.6234 | | 1.1977 | 43.0 | 58619 | 0.9163 | 0.6256 | | 1.1617 
| 44.0 | 59983 | 0.9246 | 0.6233 | | 1.1357 | 45.0 | 61346 | 0.9196 | 0.6255 | | 1.1362 | 46.0 | 62709 | 0.9221 | 0.6259 | | 1.1472 | 47.0 | 64072 | 0.9206 | 0.6263 | | 1.184 | 48.0 | 65436 | 0.9282 | 0.6256 | | 1.1096 | 49.0 | 66799 | 0.9252 | 0.6269 | | 1.1425 | 49.99 | 68150 | 0.9268 | 0.6269 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.0.1+cu117 - Datasets 2.17.0 - Tokenizers 0.15.2
fazito25/Torch-PixelCopter-v0
fazito25
2024-02-20T17:22:07Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T17:22:03Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Torch-PixelCopter-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 25.70 +/- 25.27 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
muhdshammas612/my-cat
muhdshammas612
2024-02-20T17:10:32Z
0
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-20T17:03:11Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### my-cat Dreambooth model trained by muhdshammas612 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: AEC-730221205021 Sample pictures of this concept: ![0](https://huggingface.co/muhdshammas612/my-cat/resolve/main/sample_images/cat_2_(1).jpg) ![1](https://huggingface.co/muhdshammas612/my-cat/resolve/main/sample_images/cat_1_(1).jpeg) ![2](https://huggingface.co/muhdshammas612/my-cat/resolve/main/sample_images/cat_3_(1).jpg) ![3](https://huggingface.co/muhdshammas612/my-cat/resolve/main/sample_images/cat_4_(1).jpg)
amalthenammackal/my-cat
amalthenammackal
2024-02-20T17:09:46Z
2
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-20T17:05:36Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### my-cat Dreambooth model trained by amalthenammackal following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: AEC-730221104006 Sample pictures of this concept: ![0](https://huggingface.co/amalthenammackal/my-cat/resolve/main/sample_images/cat3_(1).jpg) ![1](https://huggingface.co/amalthenammackal/my-cat/resolve/main/sample_images/cat4_(1).jpg) ![2](https://huggingface.co/amalthenammackal/my-cat/resolve/main/sample_images/cat2_(1).jpeg) ![3](https://huggingface.co/amalthenammackal/my-cat/resolve/main/sample_images/cat5_(1).jpg)
Josephgflowers/Cinder-Phi-2-Test-1
Josephgflowers
2024-02-20T17:07:01Z
170
0
transformers
[ "transformers", "safetensors", "gguf", "phi", "text-generation", "custom_code", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-19T19:28:25Z
--- license: mit widget: - text: > <|system|> You are a helpful assistant</s> <|user|> Can you explain to me how quantum computing works?</s> <|assistant|> --- Quick update, 2/20/24: testing is going great, and I am really enjoying this version of Cinder. More information coming. Training data is similar to OpenHermes 2.5, with some added math, STEM, and reasoning data mostly from OpenOrca, as well as Cinder character-specific data. Model Overview: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6328952f798f8d122ce62a44/obCyZSvfUefEWrOXaeB3o.png)
shahdaboelkassem/ZiadTTS
shahdaboelkassem
2024-02-20T17:02:47Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:common_voice_11_0", "base_model:MBZUAI/speecht5_tts_clartts_ar", "base_model:finetune:MBZUAI/speecht5_tts_clartts_ar", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-02-19T23:05:19Z
--- license: mit base_model: MBZUAI/speecht5_tts_clartts_ar tags: - generated_from_trainer datasets: - common_voice_11_0 model-index: - name: ZiadTTS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZiadTTS This model is a fine-tuned version of [MBZUAI/speecht5_tts_clartts_ar](https://huggingface.co/MBZUAI/speecht5_tts_clartts_ar) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.6477 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9355 | 1.45 | 1000 | 0.8428 | | 0.7083 | 2.9 | 2000 | 0.6772 | | 0.6844 | 4.36 | 3000 | 0.6527 | | 0.6732 | 5.81 | 4000 | 0.6477 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
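Note: fine-tuned SpeechT5 cards such as ZiadTTS above list training details but no inference snippet. The following is a minimal, hypothetical sketch using the standard SpeechT5 API; the speaker-embedding source, the illustrative Arabic input, and the assumption that the repo ships its own processor are all mine, not taken from the card.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Assumption: the fine-tuned repo includes its own processor; if not, the base
# MBZUAI/speecht5_tts_clartts_ar processor could be loaded instead.
processor = SpeechT5Processor.from_pretrained("shahdaboelkassem/ZiadTTS")
model = SpeechT5ForTextToSpeech.from_pretrained("shahdaboelkassem/ZiadTTS")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Assumption: a generic x-vector from this public dataset is an acceptable stand-in speaker embedding.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="السلام عليكم", return_tensors="pt")  # illustrative input only
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```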
cjp2/my-pet-parrot-xyg
cjp2
2024-02-20T17:01:22Z
0
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-20T16:57:30Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-parrot-xyg Dreambooth model trained by cjp2 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 814721104038 Sample pictures of this concept: ![0](https://huggingface.co/cjp2/my-pet-parrot-xyg/resolve/main/sample_images/xyg_bird_waliking_on_beach.png)
gayanin/bart-with-vocab-noise-data
gayanin
2024-02-20T16:56:49Z
6
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-20T15:34:28Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: bart-with-noise-data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-with-noise-data This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2749 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5397 | 0.87 | 500 | 0.3145 | | 0.2586 | 1.73 | 1000 | 0.2909 | | 0.2764 | 2.6 | 1500 | 0.2749 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1
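Note: the BART card above does not state what the model maps between (its dataset is listed only as "None"), so the following is purely a hypothetical illustration of querying a text2text-generation checkpoint through the transformers pipeline; the input string is invented.

```python
from transformers import pipeline

# Hypothetical usage; the input text below is invented for illustration only.
generator = pipeline("text2text-generation", model="gayanin/bart-with-vocab-noise-data")
print(generator("the patint was admited with sever chest pain", max_new_tokens=64))
```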
danrrr/ppo-LunarLander-v2
danrrr
2024-02-20T16:55:26Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T16:55:03Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 236.38 +/- 18.31 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
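Note: the PPO cards in this dump leave their Usage section as a "TODO" placeholder. As a hedged sketch (not the author's code), loading such a checkpoint might look like the following; the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption, and `gymnasium` vs. `gym` depends on which stable-baselines3 version is installed.

```python
import gymnasium as gym  # older SB3 1.x setups would use `import gym` instead
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the file inside the repo is named "ppo-LunarLander-v2.zip"; the card does not say.
checkpoint = load_from_hub(repo_id="danrrr/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```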
itzkani/my-pet-rabbit
itzkani
2024-02-20T16:53:43Z
0
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-20T16:50:00Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Rabbit Dreambooth model trained by itzkani following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 20BAD008 Sample pictures of this concept: ![0](https://huggingface.co/itzkani/my-pet-rabbit/resolve/main/sample_images/kmv_(2).jpg)
chosenone80/bert-ner-test-1
chosenone80
2024-02-20T16:47:24Z
57
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-02-20T16:42:30Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_keras_callback model-index: - name: chosenone80/bert-ner-test-1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # chosenone80/bert-ner-test-1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1319 - Validation Loss: 0.0479 - Train Precision: 0.9163 - Train Recall: 0.9298 - Train F1: 0.9230 - Train Accuracy: 0.9874 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 877, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.1319 | 0.0479 | 0.9163 | 0.9298 | 0.9230 | 0.9874 | 0 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.17.1 - Tokenizers 0.15.2
Rashmi-421-Singh/red-cartoon-xzg
Rashmi-421-Singh
2024-02-20T16:38:26Z
0
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-20T16:34:01Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### red-cartoon-xzg Dreambooth model trained by Rashmi-421-Singh following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 22/CSE/06 Sample pictures of this concept: ![0](https://huggingface.co/Rashmi-421-Singh/red-cartoon-xzg/resolve/main/sample_images/xzg_(1).jpg) ![1](https://huggingface.co/Rashmi-421-Singh/red-cartoon-xzg/resolve/main/sample_images/xzg_(4).jpg) ![2](https://huggingface.co/Rashmi-421-Singh/red-cartoon-xzg/resolve/main/sample_images/xzg_(3).jpg) ![3](https://huggingface.co/Rashmi-421-Singh/red-cartoon-xzg/resolve/main/sample_images/xzg_(2).jpg)
vinidiol/lora-llama7b-ptbr-80k
vinidiol
2024-02-20T16:37:05Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:adapter:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2024-02-20T16:33:27Z
--- library_name: peft base_model: NousResearch/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
Sanjxzg/my-pet-dog-xzg
Sanjxzg
2024-02-20T16:36:38Z
0
0
null
[ "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-20T16:35:45Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog-XZG Dreambooth model trained by Sanjxzg following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 22CSE105 Sample pictures of this concept:
Palempoo/Jovenqueesunprojugandogtaycalldutyqueesseguroeinteligente
Palempoo
2024-02-20T16:33:45Z
0
0
adapter-transformers
[ "adapter-transformers", "chemistry", "es", "dataset:teknium/OpenHermes-2.5", "license:apache-2.0", "region:us" ]
null
2024-02-20T16:29:10Z
--- license: apache-2.0 datasets: - teknium/OpenHermes-2.5 language: - es metrics: - accuracy library_name: adapter-transformers tags: - chemistry ---
KevStrider/CartPole-v1
KevStrider
2024-02-20T16:19:28Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T16:19:19Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
brsiddharth23/ppo-LunarLander-v2
brsiddharth23
2024-02-20T16:09:11Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T16:05:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 270.58 +/- 20.67 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
CatBarks/GPT2ES_ClassWeighted1_4bce_tokenizer
CatBarks
2024-02-20T16:07:00Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-20T16:06:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stillerman/instruct-aurora-alpaca
stillerman
2024-02-20T16:06:07Z
1
0
peft
[ "peft", "safetensors", "gpt_bigcode", "generated_from_trainer", "base_model:aurora-m/aurora-m-biden-harris-redteamed", "base_model:adapter:aurora-m/aurora-m-biden-harris-redteamed", "license:bigcode-openrail-m", "region:us" ]
null
2024-02-09T22:14:41Z
--- license: bigcode-openrail-m library_name: peft tags: - generated_from_trainer base_model: aurora-m/aurora-m-v0.1 model-index: - name: lora-out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: aurora-m/aurora-m-v0.1 # this can be swapped for mdel model when the model is released model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer is_llama_derived_model: false load_in_8bit: false # when this is true inference quality is terrible load_in_4bit: false strict: false datasets: - path: tatsu-lab/alpaca # change this to where your dataset is type: alpaca # change this to 'alpaca' if you are using alpaca formatting lora_modules_to_save: - embed_tokens - lm_head dataset_prepared_path: val_set_size: 0.05 output_dir: ./lora-out sequence_len: 4096 # this can be tweaked for efficiency sample_packing: true pad_to_sequence_len: true adapter: lora lora_model_dir: lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: wandb_project: aurora-instruct-alpaca # give this a name wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 2 # this can be tweaked for efficiency micro_batch_size: 1 # this can be tweaked for efficiency num_epochs: 1 # this can be experimented with optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: true group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: false # when this is true, inference quality is terrible s2_attention: warmup_steps: 10 # this can be tweaked for efficiency evals_per_epoch: 10 # this can be tweaked for efficiency eval_table_size: eval_table_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: "<|endoftext|>" eos_token: "<|endoftext|>" ``` </details><br> # lora-out This model is a fine-tuned version of [aurora-m/aurora-m-v0.1](https://huggingface.co/aurora-m/aurora-m-v0.1) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9600 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.9777 | 0.0 | 1 | 3.8904 | | 1.228 | 0.1 | 73 | 1.1761 | | 1.2383 | 0.2 | 146 | 1.0635 | | 0.9985 | 0.3 | 219 | 1.0268 | | 1.0444 | 0.4 | 292 | 1.0058 | | 0.9859 | 0.5 | 365 | 0.9904 | | 0.9736 | 0.6 | 438 | 0.9759 | | 1.0146 | 0.7 | 511 | 0.9655 | | 1.0007 | 0.8 | 584 | 0.9610 | | 0.9943 | 0.9 | 657 | 0.9600 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
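Note: many of the PEFT entries in this dump, including this LoRA of aurora-m, ship only an adapter. The sketch below shows one plausible way to attach it to a base model; the base model id is taken from the card's own axolotl config, the Alpaca-style prompt is inferred from the training dataset, and neither is guaranteed by the card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "aurora-m/aurora-m-v0.1"  # base model named in the card's axolotl config (an assumption)
adapter_id = "stillerman/instruct-aurora-alpaca"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)

# Alpaca-style prompt, inferred from the tatsu-lab/alpaca dataset used for fine-tuning.
prompt = "### Instruction:\nSummarize what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```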
lawful-good-project/llama-2-7b-ipc
lawful-good-project
2024-02-20T15:51:32Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "legal", "ru", "en", "dataset:lawful-good-project/ipc-inst-2k", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-25T11:13:02Z
--- license: gpl-3.0 datasets: - lawful-good-project/ipc-inst-2k language: - ru - en tags: - legal --- This model was trained on the instruction dataset https://huggingface.co/datasets/lawful-good-project/ipc-inst-2k in the free tier of Google Colab, following the recipe of this experiment: https://www.datacamp.com/tutorial/fine-tuning-llama-2
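The card includes no usage snippet, so here is a minimal, hedged inference sketch with transformers. The `[INST]` prompt format is an assumption carried over from the base Llama-2 chat model, not something the card documents, and `device_map="auto"` assumes `accelerate` is installed.

```python
# Sketch only: the prompt template is assumed to follow Llama-2 chat ([INST] ... [/INST]);
# adjust it if the fine-tuning used a different format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lawful-good-project/llama-2-7b-ipc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "[INST] What punishment does the criminal code prescribe for theft? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```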
stompinghoss/ppo-LunarLander-v2
stompinghoss
2024-02-20T15:44:08Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-26T16:48:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 283.98 +/- 17.88 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is an assumption, so check the repository files for the actual name: ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="stompinghoss/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
YohanBel/bruh
YohanBel
2024-02-20T15:42:48Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T15:42:47Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: bruh results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="YohanBel/bruh", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
YohanBel/q-FrozenLake-v1-4x4-noSlippery
YohanBel
2024-02-20T15:35:16Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T15:35:14Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="YohanBel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
chathuranga-jayanath/codet5-small-v31
chathuranga-jayanath
2024-02-20T15:20:45Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Salesforce/codet5-small", "base_model:finetune:Salesforce/codet5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-20T14:22:39Z
--- license: apache-2.0 base_model: Salesforce/codet5-small tags: - generated_from_trainer model-index: - name: codet5-small-v31 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codet5-small-v31 This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2023 - Bleu Score: 0.0013 - Gen Len: 14.2125 ## Model description trained on: - dataset: [BUG]...<extra_id_0>...[CONTEXT]... ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu Score | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:----------:|:-------:| | 0.3058 | 1.0 | 1518 | 0.2471 | 0.0013 | 14.0882 | | 0.2444 | 2.0 | 3036 | 0.2102 | 0.0013 | 14.218 | | 0.2223 | 3.0 | 4554 | 0.2023 | 0.0013 | 14.2125 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
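Given the `[BUG] ... <extra_id_0> ... [CONTEXT] ...` input format mentioned in the card, a hedged usage sketch might look like the following; the concrete bug and context snippets below are invented placeholders, and the generation settings are arbitrary.

```python
# Sketch of querying the fine-tuned CodeT5 checkpoint with the card's input format.
# The BUG/CONTEXT contents here are made-up examples, not from the training data.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chathuranga-jayanath/codet5-small-v31"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

source = (
    "[BUG] if (i <= list.size()) <extra_id_0> "
    "[CONTEXT] for (int i = 0; i <= list.size(); i++) { process(list.get(i)); }"
)
inputs = tokenizer(source, return_tensors="pt")
output = model.generate(**inputs, max_length=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```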
Palistha/T5-small-finetuned
Palistha
2024-02-20T15:11:17Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Palistha/finetuned-T5", "base_model:finetune:Palistha/finetuned-T5", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-20T14:59:17Z
--- base_model: Palistha/finetuned-T5 tags: - generated_from_trainer model-index: - name: T5-small-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5-small-finetuned This model is a fine-tuned version of [Palistha/finetuned-T5](https://huggingface.co/Palistha/finetuned-T5) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50.0 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
olonok/flan-t5-base-samsum
olonok
2024-02-20T15:07:43Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "region:us" ]
null
2024-02-20T15:07:41Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: flan-t5-base-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-samsum This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 46.2704 - Rouge2: 22.2776 - Rougel: 38.7157 - Rougelsum: 42.2861 - Gen Len: 16.7729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.0 | 1.0 | 1842 | nan | 46.2704 | 22.2776 | 38.7157 | 42.2861 | 16.7729 | | 0.0 | 2.0 | 3684 | nan | 46.2704 | 22.2776 | 38.7157 | 42.2861 | 16.7729 | | 0.0 | 3.0 | 5526 | nan | 46.2704 | 22.2776 | 38.7157 | 42.2861 | 16.7729 | | 0.0 | 4.0 | 7368 | nan | 46.2704 | 22.2776 | 38.7157 | 42.2861 | 16.7729 | | 0.0 | 5.0 | 9210 | nan | 46.2704 | 22.2776 | 38.7157 | 42.2861 | 16.7729 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
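For completeness, a minimal summarization sketch for this checkpoint using the standard pipeline API; the dialogue below is invented and the length limits are arbitrary.

```python
# Sketch: summarize a SAMSum-style dialogue with the fine-tuned FLAN-T5 checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="olonok/flan-t5-base-samsum")

dialogue = (
    "Anna: Are we still meeting at 6?\n"
    "Ben: Yes, but I might be 10 minutes late.\n"
    "Anna: No problem, see you at the cafe."
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```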
adityarra07/whisper-medium-train_noise2
adityarra07
2024-02-20T14:56:21Z
3
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-20T00:03:41Z
--- license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-medium-train_noise2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-train_noise2 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1641 - Wer: 7.3711 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2575 | 1.0 | 1385 | 0.1408 | 7.9993 | | 0.0824 | 2.0 | 2770 | 0.1419 | 7.4266 | | 0.0356 | 3.0 | 4155 | 0.1427 | 7.2788 | | 0.0131 | 4.0 | 5540 | 0.1548 | 7.3157 | | 0.0039 | 5.0 | 6925 | 0.1588 | 7.1864 | | 0.0011 | 6.0 | 8310 | 0.1641 | 7.3711 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
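A minimal transcription sketch with the standard ASR pipeline; the audio path is a placeholder and chunking settings are arbitrary.

```python
# Sketch: transcribe an audio file with the fine-tuned Whisper checkpoint.
# "sample.wav" is a placeholder path; replace it with a real recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="adityarra07/whisper-medium-train_noise2",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])
```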
moussaKam/mbarthez
moussaKam
2024-02-20T14:35:26Z
299
5
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "summarization", "fill-mask", "fr", "arxiv:2010.12321", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - summarization language: - fr license: apache-2.0 pipeline_tag: "fill-mask" --- A French sequence-to-sequence pretrained model based on [BART](https://huggingface.co/facebook/bart-large). <br> BARThez is pretrained by learning to reconstruct a corrupted input sentence. A corpus of 66GB of French raw text is used to carry out the pretraining. <br> Unlike already existing BERT-based French language models such as CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks (such as abstractive summarization), since not only its encoder but also its decoder is pretrained. In addition to BARThez, which is pretrained from scratch, we continued the pretraining of a multilingual BART, [mBART](https://huggingface.co/facebook/mbart-large-cc25), which boosted its performance in both discriminative and generative tasks. We call the French-adapted version [mBARThez](https://huggingface.co/moussaKam/mbarthez). | Model | Architecture | #layers | #params | | ------------- |:-------------:| :-----:|:-----:| | [BARThez](https://huggingface.co/moussaKam/barthez) | BASE | 12 | 165M | | [mBARThez](https://huggingface.co/moussaKam/mbarthez) | LARGE | 24 | 458M | paper: https://arxiv.org/abs/2010.12321 \ github: https://github.com/moussaKam/BARThez ``` @article{eddine2020barthez, title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model}, author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis}, journal={arXiv preprint arXiv:2010.12321}, year={2020} } ```
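The card's pipeline tag is fill-mask but no usage snippet is given. Below is a minimal, hedged sketch of mask filling through the seq2seq interface; the mask token is read from the tokenizer rather than hard-coded, and the generation settings are arbitrary.

```python
# Sketch: denoise (fill the mask in) a French sentence with mBARThez.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "moussaKam/mbarthez"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

sentence = f"Paris est la {tokenizer.mask_token} de la France."
inputs = tokenizer(sentence, return_tensors="pt")
output = model.generate(**inputs, max_length=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```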
Priyanka-Balivada/electra-5-epoch-sentiment
Priyanka-Balivada
2024-02-20T14:32:28Z
7
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "base_model:google/electra-small-discriminator", "base_model:finetune:google/electra-small-discriminator", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-29T10:22:52Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - precision - recall base_model: google/electra-small-discriminator model-index: - name: electra-5-epoch-sentiment results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: sentiment split: test args: sentiment metrics: - type: accuracy value: 0.6893520026050146 name: Accuracy - type: precision value: 0.6913776305729754 name: Precision - type: recall value: 0.6893520026050146 name: Recall --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> TOKENIZER & TRAINER CORRUPTED # electra-5-epoch-sentiment This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.7949 - Accuracy: 0.6894 - Precision: 0.6914 - Recall: 0.6894 - Micro-avg-recall: 0.6894 - Micro-avg-precision: 0.6894 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Micro-avg-recall | Micro-avg-precision | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:----------------:|:-------------------:| | 0.5949 | 1.0 | 2851 | 0.6963 | 0.6926 | 0.6943 | 0.6926 | 0.6926 | 0.6926 | | 0.6502 | 2.0 | 5702 | 0.7348 | 0.6911 | 0.6929 | 0.6911 | 0.6911 | 0.6911 | | 0.556 | 3.0 | 8553 | 0.7322 | 0.6943 | 0.6952 | 0.6943 | 0.6943 | 0.6943 | | 0.4561 | 4.0 | 11404 | 0.7601 | 0.6895 | 0.6916 | 0.6895 | 0.6895 | 0.6895 | | 0.471 | 5.0 | 14255 | 0.7949 | 0.6894 | 0.6914 | 0.6894 | 0.6894 | 0.6894 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
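No usage example is given, and the note above warns that the tokenizer and trainer files may be corrupted, so treat the following as an untested sketch of the standard text-classification pipeline call rather than a verified recipe.

```python
# Sketch: run the fine-tuned ELECTRA sentiment classifier on a tweet-style input.
# This assumes the repository's tokenizer files load correctly.
from transformers import pipeline

classifier = pipeline("text-classification", model="Priyanka-Balivada/electra-5-epoch-sentiment")
print(classifier("I absolutely loved the new update!"))
```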
Shritama/t5-small-finetuned-nl2sql
Shritama
2024-02-20T14:26:19Z
25
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-19T16:22:29Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer model-index: - name: t5-small-finetuned-nl2sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-nl2sql This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 356 | 1.1085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
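A hedged usage sketch for the NL-to-SQL task follows; the card does not document how training examples were serialized, so the plain-question input (with no prefix or schema) is an assumption.

```python
# Sketch: generate SQL from a natural-language question. The input format is an
# assumption, since the card does not document the serialization used in training.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Shritama/t5-small-finetuned-nl2sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "How many employees are in the sales department?"
inputs = tokenizer(question, return_tensors="pt")
output = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```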
Dobri2227/Peptide
Dobri2227
2024-02-20T14:19:25Z
0
0
null
[ "region:us" ]
null
2024-02-20T14:18:58Z
Generate a hypothetical fish peptide sequence with the added feature of having a scent reminiscent of a rose. Incorporate amino acids and modifications that could symbolize the combination of fish and rose fragrance. Be creative in designing this fictional peptide.
Olivia-umich/dqn-SpaceInvadersNoFrameskip-v4
Olivia-umich
2024-02-20T14:00:09Z
7
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T13:59:41Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 394.50 +/- 259.98 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Olivia-umich -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Olivia-umich -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Olivia-umich ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
danielhanchen/lora_model4_21022024
danielhanchen
2024-02-20T13:59:48Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-20T13:59:33Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
danielhanchen/lora_new2_20022024
danielhanchen
2024-02-20T13:59:27Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-20T13:59:09Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AI-Sweden-Models/gpt-sw3-40b
AI-Sweden-Models
2024-02-20T13:58:22Z
1,838
10
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "da", "sv", "no", "en", "is", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-02-22T15:10:59Z
--- license: other language: - da - sv - 'no' - en - is --- # Model description [AI Sweden](https://huggingface.co/AI-Sweden-Models/) **Base models** [GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/) [GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/) [GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/) **Instruct models** [GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/) [GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/) **Quantized models** [GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq) GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation. # Intended use GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks. # Limitations Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content. # How to use To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information. The following code snippet loads our tokenizer & model, and uses the GPU if available. 
```python import torch from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM # Initialize Variables model_name = "AI-Sweden-Models/gpt-sw3-40b" device = "cuda:0" if torch.cuda.is_available() else "cpu" prompt = "Träd är fina för att" # Initialize Tokenizer & Model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) model.eval() model.to(device) ``` Generating text using the `generate` method is done as follows: ```python input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device) generated_token_ids = model.generate( inputs=input_ids, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1, )[0] generated_text = tokenizer.decode(generated_token_ids) ``` A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you: ```python generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device) generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"] ``` # Compliance The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material. # GPT-SW3 Model Card Following Mitchell et al. (2018), we provide a model card for GPT-SW3. # Model Details - Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. - Model date: GPT-SW3 date of release 2022-12-20 - Model version: This is the second generation of GPT-SW3. - Model type: GPT-SW3 is a large decoder-only transformer language model. - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation. - Paper or other resource for more information: N/A. - License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/blob/main/LICENSE). - Where to send questions or comments about the model: [email protected] # Intended Use - Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not. - Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community. - Out-of-scope use cases: See the modified RAIL license. # Data, Limitations, and Recommendations - Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model. - Data selection for evaluation: N/A - Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. 
The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs. - Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. - We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general. # GPT-SW3 Datasheet - We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3. # Motivation - For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages. - Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE. - Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949. - Any other comments? No. # Composition - What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. 
The dataset is a filtered and deduplicated collection that includes the following sources: - Books - Litteraturbanken (https://litteraturbanken.se/) - The Pile - Articles - Diva (https://www.diva-portal.org/) - The Pile: PubMed - The Pile: ArXiv - Code - Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code) - Conversational - Familjeliv (https://www.familjeliv.se/) - Flashback (https://flashback.se/) - Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI) - Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - Math - English Math dataset generated with code from DeepMind (D. Saxton et al., 2019) - Swedish Math dataset, generated as above with manually translated templates - Miscellaneous - Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf) - OPUS, the open parallel corpus (https://opus.nlpl.eu/) - Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database) - Natural Instructions (https://github.com/allenai/natural-instructions) - P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3) - The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC) - Danish Gigaword (https://gigaword.dk/) - Icelandic Gigaword (https://clarin.is/en/resources/gigaword/) - The Pile: Stack Exchange - Web Common Crawl - Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se). - Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019) - Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019) - The Pile: Open Web Text - Web Sources - Various public Swedish website scrapes (see Appendix in data paper) - Familjeliv Articles - Public Swedish Job Ads from JobTech/Arbetsförmedlingen - Wikipedia - Official Wikipedia dumps - How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens. - Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources. - What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data. - Is there a label or target associated with each instance? If so, please provide a description. No. - Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). 
This does not include intentionally removed information, but might include, e.g., redacted text. No. - Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances. - Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus. - Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies. - Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained. - Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety. - Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc. - Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification. - Any other comments? No. # Collection Process - How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources. - What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet. - If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected. 
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines. - Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years. - Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset. - Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes. - Any other comments? No. - Preprocessing/cleaning/labeling - Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021). - Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations. - Any other comments? No. # Uses - Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models. - Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A. - What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks. - Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? 
For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population. - Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of. - Any other comments? No. # Distribution - Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No. - How will the dataset distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A. - When will the dataset be distributed? N/A. - Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A. - Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A. - Any other comments? No. # Maintenance - Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB. - How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected] - Is there an erratum? If so, please provide a link or other access point. N/A. - Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset. - If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu). - Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A. - If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time. - Any other comments? No.
beomi/OPEN-SOLAR-KO-10.7B
beomi
2024-02-20T13:49:00Z
199
57
transformers
[ "transformers", "safetensors", "llama", "text-generation", "solar", "mistral", "pytorch", "solar-ko", "ko", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-02T02:01:37Z
--- language: - ko - en pipeline_tag: text-generation inference: false tags: - solar - mistral - pytorch - solar-ko library_name: transformers license: apache-2.0 --- **Update Log** - 2024.01.08: Initial Test version Release of Solar-Ko # **Open-Solar-Ko** ⭐🇰🇷 Solar-Ko represents an advanced iteration of the upstage/SOLAR-10.7B-v1.0 model, featuring an expanded vocabulary and the inclusion of a Korean corpus for enhanced pretraining. Open-Solar-Ko exclusively utilizes publicly accessible Korean corpora, including sources such as [AI Hub](https://www.aihub.or.kr), [Modu Corpus, 모두의 말뭉치](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/). As training was conducted solely with publicly available corpora, this model is open for unrestricted use by everyone, adhering to the Apache2.0 open source License. ## Model Details **Model Developers:** Junbum Lee (Beomi) **Variations:** Solar-Ko is available with one parameter sizes — 10B with Continual Pretrained version. **Input:** The model accepts only text input. **Output:** The model produces text output exclusively. **Model Architecture:** SOLAR-KO-10.7B is an auto-regressive language model that leverages an optimized transformer architecture derived from Llama-2. | |Training Data|Parameters|Content Length|GQA|Tokens|Learning Rate| |---|---|---|---|---|---|---| |SOLAR-KO-10.7B|*A curated mix of Publicly Accessible Korean Corpora*|10.7B|4k|O|>15B*|5e<sup>-5</sup>| **Training Corpus** The model was trained using selected datasets from AIHub and Modu Corpus. Detailed information about the training datasets is available below: - AI Hub: [corpus/AI_HUB](./corpus/AI_HUB) - Only the `Training` segment of the data was used. - The `Validation` and `Test` segments were deliberately excluded. - Modu Corpus: [corpus/MODU_CORPUS](./corpus/MODU_CORPUS) The final JSONL dataset used to train this model is approximately 61GB in size. Total token count: Approximately 15 billion tokens (*using the expanded tokenizer. With the original SOLAR tokenizer, >60 billion tokens.) **Vocab Expansion** | Model Name | Vocabulary Size | Description | | --- | --- | --- | | Original Solar | 32000 | Sentencepiece BPE | | **Expanded SOLAR-KO-10.7B** | 46592 | Sentencepiece BPE. 
Added Korean vocab and merges | **Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."** - SOLAR-10.7B: 26 tokens - SOLAR-KO-10.7b: 8 tokens | Model | Tokens | | --- | --- | | SOLAR-10.7B | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '날', '<0xEC>', '<0x94>', '<0xA8>', '가', '▁', '좋', '네', '요', '.']` | | SOLAR-KO-10.7B | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.']` | **Tokenizing "Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!"** - SOLAR-10.7B: 22 tokens - SOLAR-KO-10.7b: 22 tokens | Model | Tokens | | --- | --- | | SOLAR-10.7B | `['▁Meet', '▁', '1', '0', '.', '7', 'B', '▁Solar', ':', '▁E', 'lev', 'ating', '▁Performance', '▁with', '▁Up', 'stage', '▁Dep', 'th', '▁UP', '▁Scal', 'ing', '!']` | | SOLAR-KO-10.7B | `['▁Meet', '▁', '1', '0', '.', '7', 'B', '▁Solar', ':', '▁E', 'lev', 'ating', '▁Performance', '▁with', '▁Up', 'stage', '▁Dep', 'th', '▁UP', '▁Scal', 'ing', '!']` | # LICENSE Apache 2.0 # **Model Benchmark** ## LM Eval Harness - Korean (polyglot branch) - Used EleutherAI's lm-evaluation-harness https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot | | 0 | 5 | 10 | 50 | |:---------------------------------|---------:|---------:|---------:|---------:| | kobest_boolq (macro_f1) | 0.853949 | 0.88098 | 0.898139 | 0.902354 | | kobest_copa (macro_f1) | 0.804531 | 0.826736 | 0.837656 | 0.860899 | | kobest_hellaswag (macro_f1) | 0.507174 | 0.500983 | 0.487287 | 0.512182 | | kobest_sentineg (macro_f1) | 0.3517 | 0.972291 | 0.977321 | 0.984884 | | kohatespeech (macro_f1) | 0.258111 | 0.403957 | 0.386808 | 0.462393 | | kohatespeech_apeach (macro_f1) | 0.337667 | 0.651697 | 0.705337 | 0.827757 | | kohatespeech_gen_bias (macro_f1) | 0.124535 | 0.503464 | 0.498501 | 0.443218 | | korunsmile (f1) | 0.3814 | 0.356939 | 0.369989 | 0.296193 | | nsmc (acc) | 0.5356 | 0.87162 | 0.88654 | 0.89632 | | pawsx_ko (acc) | 0.5435 | 0.5245 | 0.5315 | 0.5385 | ## Citation ``` @misc {solar_ko_junbum_2023, author = { {L. Junbum} }, title = { Solar-Ko-10.7b }, year = 2024, url = { https://huggingface.co/beomi/SOLAR-KO-10.7B }, publisher = { Hugging Face } } ``` ## Acknowledgements - Training support was provided by the [TPU Research Cloud](https://sites.research.google/trc/) program. - The training corpus includes data from [AI Hub](https://www.aihub.or.kr/), [Modu Corpus](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/).
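The card above describes the expanded Korean tokenizer but gives no loading snippet. A minimal sketch is shown below, assuming the standard `transformers` auto classes work for this Llama-architecture checkpoint; the example sentence is the one quoted in the card, and the generation settings are illustrative.

```python
# Minimal sketch (not from the original card): load the expanded tokenizer and
# reproduce the Korean token-count comparison, then run a short generation.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/OPEN-SOLAR-KO-10.7B")
model = AutoModelForCausalLM.from_pretrained("beomi/OPEN-SOLAR-KO-10.7B", device_map="auto")

text = "안녕하세요, 오늘은 날씨가 좋네요."
tokens = tokenizer.tokenize(text)
print(len(tokens), tokens)  # the card reports 8 tokens with the expanded vocabulary

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```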
JhonVanced/faster-whisper-large-v3-ja
JhonVanced
2024-02-20T13:37:33Z
57
1
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "yue", "license:mit", "region:us" ]
automatic-speech-recognition
2024-02-14T12:20:05Z
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - 'no' - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su - yue tags: - audio - automatic-speech-recognition license: mit library_name: ctranslate2 --- Convert from: Watarungurunnn/whisper-large-v3-ja # Whisper large-v3 model for CTranslate2 This repository contains the conversion of [Watarungurunnn/whisper-large-v3-ja](https://huggingface.co/Watarungurunnn/whisper-large-v3-ja) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format. This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper). ## Example ```python from faster_whisper import WhisperModel model = WhisperModel("large-v3") segments, info = model.transcribe("audio.mp3") for segment in segments: print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) ``` ## Conversion details The original model was converted with the following command: ``` ct2-transformers-converter --model Watarungurunnn/whisper-large-v3-ja --output_dir faster-whisper-large-v3-ja \ --copy_files tokenizer.json preprocessor_config.json --quantization float16 ``` Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html). ## More information **For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v3).**
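Note that the usage snippet above loads the stock `large-v3` weights rather than this conversion. The sketch below, which assumes faster-whisper's support for Hugging Face repository IDs and its `language` argument, loads this repository directly and pins decoding to Japanese.

```python
from faster_whisper import WhisperModel

# Sketch: load this converted repository by its Hub ID and force Japanese decoding.
model = WhisperModel("JhonVanced/faster-whisper-large-v3-ja", device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.mp3", language="ja", beam_size=5)
print("Language used for decoding:", info.language)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```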
DhimanBose/FineTuned_BanglaBERT_for_Masked_Language_Model
DhimanBose
2024-02-20T13:17:17Z
7
1
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-02-20T13:13:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlx-community/MiniCPM-2B-sft-bf16-llama-format-mlx
mlx-community
2024-02-20T13:15:50Z
17
3
mlx
[ "mlx", "llama", "MiniCPM", "ModelBest", "THUNLP", "en", "zh", "region:us" ]
null
2024-02-20T10:06:16Z
--- language: - en - zh tags: - MiniCPM - ModelBest - THUNLP - mlx --- # mlx-community/MiniCPM-2B-sft-bf16-llama-format-mlx This model was converted to MLX format from [`openbmb/MiniCPM-2B-sft-bf16-llama-format`](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16-llama-format). Refer to the [original model card](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16-llama-format) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/MiniCPM-2B-sft-bf16-llama-format-mlx") response = generate(model, tokenizer, prompt="hello", verbose=True) ```

linoyts/huggy_lora_unet_1_repeats
linoyts
2024-02-20T13:05:06Z
2
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-02-20T12:24:43Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a TOK emoji dressed as yoda' output: url: "image_0.png" - text: 'a TOK emoji dressed as yoda' output: url: "image_1.png" - text: 'a TOK emoji dressed as yoda' output: url: "image_2.png" - text: 'a TOK emoji dressed as yoda' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a TOK emoji license: openrail++ --- # SDXL LoRA DreamBooth - linoyts/huggy_lora_unet_1_repeats <Gallery /> ## Model description ### These are linoyts/huggy_lora_unet_1_repeats LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`huggy_lora_unet_1_repeats.safetensors` here 💾](/linoyts/huggy_lora_unet_1_repeats/blob/main/huggy_lora_unet_1_repeats.safetensors)**. - Place it on your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:huggy_lora_unet_1_repeats:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('linoyts/huggy_lora_unet_1_repeats', weight_name='pytorch_lora_weights.safetensors') image = pipeline('a TOK emoji dressed as yoda').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words You should use a TOK emoji to trigger the image generation. ## Details All [Files & versions](/linoyts/huggy_lora_unet_1_repeats/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled. False. Pivotal tuning was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
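The card points to the diffusers documentation for LoRA weighting, merging and fusing. A minimal weighting sketch is shown below; the `cross_attention_kwargs={"scale": ...}` pattern and the chosen scale value are assumptions on my part, not part of the original card.

```python
from diffusers import AutoPipelineForText2Image
import torch

# Sketch (assumed, not from the card): adjust the LoRA strength at inference time.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "linoyts/huggy_lora_unet_1_repeats", weight_name="pytorch_lora_weights.safetensors"
)

# A scale below 1.0 blends the LoRA more softly with the base model.
image = pipeline(
    "a TOK emoji dressed as yoda",
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image.save("huggy_yoda.png")
```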
Meli101/krissbert-sentence-classifier
Meli101
2024-02-20T13:01:21Z
60
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL", "base_model:finetune:microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-20T12:47:42Z
--- license: mit base_model: microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL tags: - generated_from_keras_callback model-index: - name: Meli101/krissbert-sentence-classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Meli101/krissbert-sentence-classifier This model is a fine-tuned version of [microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL](https://huggingface.co/microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1489 - Validation Loss: 0.2873 - Train Precision: 0.9170 - Train Recall: 0.9151 - Train Accuracy: 0.9154 - Train F1: 0.9151 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1535, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train Accuracy | Train F1 | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------------:|:--------:|:-----:| | 0.5011 | 0.3717 | 0.8777 | 0.8711 | 0.8723 | 0.8724 | 0 | | 0.2382 | 0.3152 | 0.9012 | 0.8984 | 0.8991 | 0.8991 | 1 | | 0.1489 | 0.2873 | 0.9170 | 0.9151 | 0.9154 | 0.9151 | 2 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.17.1 - Tokenizers 0.15.2
lvcalucioli/phi2_
lvcalucioli
2024-02-20T13:00:55Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-02-20T11:12:31Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: phi2_ results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi2_ This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 5 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.2
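The card does not show how to load the adapter. Since this is a PEFT repository trained on top of microsoft/phi-2, a plausible loading sketch is given below; the prompt and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Sketch: attach the PEFT adapter in this repo to its phi-2 base model.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

model = PeftModel.from_pretrained(base, "lvcalucioli/phi2_")
model.eval()

inputs = tokenizer("Question: What is instruction tuning?\nAnswer:", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```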
Rishav007/Rishav
Rishav007
2024-02-20T12:59:08Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-02-20T12:12:17Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: Rishav results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Rishav This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
Leul78/sfttrial1
Leul78
2024-02-20T12:42:03Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-20T12:37:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LarryAIDraw/dmastraea-nvwls-v1
LarryAIDraw
2024-02-20T12:40:31Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-20T12:35:42Z
--- license: creativeml-openrail-m --- https://civitai.com/models/313500/astraea-is-it-wrong-to-try-to-pick-up-girls-in-a-dungeon-lora
LarryAIDraw/niijima_sae-10
LarryAIDraw
2024-02-20T12:40:09Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-20T12:35:03Z
--- license: creativeml-openrail-m --- https://civitai.com/models/313701/niijima-sae-persona-5-lora-commission
LarryAIDraw/lady_nagant-10
LarryAIDraw
2024-02-20T12:39:51Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-20T12:34:22Z
--- license: creativeml-openrail-m --- https://civitai.com/models/313715/lady-nagant-my-hero-academia-lora-commission
LarryAIDraw/Char-HonkaiSR-Firefly-V1
LarryAIDraw
2024-02-20T12:39:41Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-20T12:34:02Z
--- license: creativeml-openrail-m --- https://civitai.com/models/314105/firefly-or-honkai-star-rail
LarryAIDraw/ryza-10
LarryAIDraw
2024-02-20T12:39:15Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-20T12:33:21Z
--- license: creativeml-openrail-m --- https://civitai.com/models/313748/reisalin-ryza-stout-atelier-ryza-lora-commission
LarryAIDraw/klaudia_valentz-10
LarryAIDraw
2024-02-20T12:39:06Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-20T12:33:00Z
--- license: creativeml-openrail-m --- https://civitai.com/models/313734/klaudia-valentz-atelier-ryza-lora-commission
LarryAIDraw/lila_decyrus-10
LarryAIDraw
2024-02-20T12:38:57Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-20T12:32:40Z
--- license: creativeml-openrail-m --- https://civitai.com/models/313755/lila-decyrus-atelier-ryza-lora-commission
Meli101/results
Meli101
2024-02-20T12:33:32Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL", "base_model:finetune:microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-20T12:24:47Z
--- license: mit base_model: microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL tags: - generated_from_trainer metrics: - precision - recall - accuracy - f1 model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL](https://huggingface.co/microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4228 - Precision: 0.9215 - Recall: 0.9209 - Accuracy: 0.9211 - F1: 0.9210 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:------:| | No log | 1.0 | 308 | 0.3266 | 0.8847 | 0.8822 | 0.8820 | 0.8824 | | 0.4217 | 2.0 | 616 | 0.3034 | 0.9072 | 0.9066 | 0.9064 | 0.9065 | | 0.4217 | 3.0 | 924 | 0.3483 | 0.9171 | 0.9170 | 0.9170 | 0.9171 | | 0.163 | 4.0 | 1232 | 0.3952 | 0.9227 | 0.9227 | 0.9227 | 0.9226 | | 0.0722 | 5.0 | 1540 | 0.4228 | 0.9215 | 0.9209 | 0.9211 | 0.9210 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
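The card reports classification metrics but no inference snippet. Assuming the repository exposes a standard sequence-classification head, a pipeline call like the sketch below should work; the example sentence is made up, and the label names depend on how the model was configured.

```python
from transformers import pipeline

# Sketch: run the fine-tuned KRISSBERT sentence classifier on a biomedical sentence.
classifier = pipeline("text-classification", model="Meli101/results")

sentence = "The patient was started on metformin for type 2 diabetes."
print(classifier(sentence))  # e.g. [{'label': 'LABEL_0', 'score': 0.98}]; labels are model-specific
```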
Novocoders/Mistral-NeuralDPO-v0.5
Novocoders
2024-02-20T12:28:17Z
15
2
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "generated_from_trainer", "dataset:NeuralNovel/Neural-DPO", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-19T21:49:12Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 datasets: NeuralNovel/Neural-DPO tags: - generated_from_trainer model-index: - name: out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: mistralai/Mistral-7B-v0.1 model_type: MistralForCausalLM tokenizer_type: LlamaTokenizer is_mistral_derived_model: true load_in_8bit: false load_in_4bit: false strict: false rl: dpo datasets: - path: NeuralNovel/Neural-DPO split: train type: chatml.intel format: "[INST] {instruction} [/INST]" no_input_format: "[INST] {instruction} [/INST]" dataset_prepared_path: val_set_size: 0.05 output_dir: ./out sequence_len: 8192 sample_packing: false pad_to_sequence_len: true eval_sample_packing: false wandb_project: wandb_entity: wandb_watch: wandb_name: Neural-DPO wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 12 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.000005 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 0 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>" ``` </details><br> Creator: <a href="https://huggingface.co/NovoCode">NovoCode</a> Community Organization: <a href="https://huggingface.co/ConvexAI">ConvexAI</a> Discord: <a href="https://discord.gg/rJXGjmxqzS">Join us on Discord</a> ## Model description This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on [Neural-DPO](https://huggingface.co/datasets/NeuralNovel/Neural-DPO). This model should excel at question answering across a rich array of subjects across a wide range of domains such as literature, scientific research, and theoretical inquiries. ## ExLlamaV2 Quants ExLlamaV2 quants are available from [bartowski here](https://huggingface.co/bartowski/Mistral-NeuralDPO-v0.5-exl2) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 1602 ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.0
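The axolotl config above formats prompts as `[INST] {instruction} [/INST]`. A plain `transformers` generation sketch that mirrors that template might look as follows; the sampling parameters and the example instruction are assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Novocoders/Mistral-NeuralDPO-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Mirror the [INST] ... [/INST] format used during DPO training (per the config above).
prompt = "[INST] Summarise the main idea behind Direct Preference Optimization. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```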
Olivia-umich/q-taxi-v3
Olivia-umich
2024-02-20T12:26:23Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T12:26:20Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Olivia-umich/q-taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
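The usage snippet stops after recreating the environment. A short greedy-rollout sketch is given below; it assumes the pickled dictionary follows the Deep RL course convention and stores the table under a `qtable` key, which may differ in this repository.

```python
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Sketch: greedy rollout with the downloaded Q-table (assumes a "qtable" key in the pickle).
with open(hf_hub_download(repo_id="Olivia-umich/q-taxi-v3", filename="q-learning.pkl"), "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, _ = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```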
Jayanthini/phi-2-role-play
Jayanthini
2024-02-20T12:22:02Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-02-19T18:27:24Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: phi-2-role-play results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-role-play This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.17.1 - Tokenizers 0.15.1
yeniceriSGK/miniCPM-pi-brain-v1-quantized
yeniceriSGK
2024-02-20T12:18:10Z
5
0
transformers
[ "transformers", "safetensors", "minicpm", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-02-20T12:17:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
salehardec/labse-narym
salehardec
2024-02-20T12:15:02Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-20T04:13:20Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 650 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "__main__.ChainScoreEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "warmupcosine", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) (2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (3): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
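The usage block above still contains the `{MODEL_NAME}` template placeholder. Substituting the actual repository ID gives a sketch like the following; the example sentences are the ones from the card.

```python
from sentence_transformers import SentenceTransformer, util

# Sketch: load this repository by its actual ID instead of the {MODEL_NAME} placeholder.
model = SentenceTransformer("salehardec/labse-narym")

sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
print(embeddings.shape)                           # (2, 768) per the card's architecture
print(util.cos_sim(embeddings[0], embeddings[1]))
```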
binitt/buttons-train
binitt
2024-02-20T12:12:13Z
30
0
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-02-20T12:12:03Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: buttons-train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # buttons-train This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
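No inference example is given in the card. Assuming the fine-tune keeps the standard DETR object-detection head, a pipeline sketch could look like this; the input file name is a hypothetical local image.

```python
from transformers import pipeline

# Sketch: detect buttons in a screenshot with the fine-tuned DETR model.
detector = pipeline("object-detection", model="binitt/buttons-train")

for detection in detector("screenshot.png"):  # hypothetical local image
    print(detection["label"], round(detection["score"], 2), detection["box"])
```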
sisu16/q-FrozenLake-v1-4x4-noSlippery
sisu16
2024-02-20T12:05:07Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T12:05:05Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="sisu16/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
TurkuNLP/bert-base-finnish-uncased-v1
TurkuNLP
2024-02-20T11:56:25Z
1,143
0
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "fi", "arxiv:1912.07076", "arxiv:1908.04212", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: fi --- ## Quickstart **Release 1.0** (November 25, 2019) Download the models here: * Cased Finnish BERT Base: [bert-base-finnish-cased-v1.zip](http://dl.turkunlp.org/finbert/bert-base-finnish-cased-v1.zip) * Uncased Finnish BERT Base: [bert-base-finnish-uncased-v1.zip](http://dl.turkunlp.org/finbert/bert-base-finnish-uncased-v1.zip) We generally recommend the use of the cased model. Paper presenting Finnish BERT: [arXiv:1912.07076](https://arxiv.org/abs/1912.07076) ## What's this? A version of Google's [BERT](https://github.com/google-research/bert) deep transfer learning model for Finnish. The model can be fine-tuned to achieve state-of-the-art results for various Finnish natural language processing tasks. FinBERT features a custom 50,000 wordpiece vocabulary that has much better coverage of Finnish words than e.g. the previously released [multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) models from Google: | Vocabulary | Example | |------------|---------| | FinBERT | Suomessa vaihtuu kesän aikana sekä pääministeri että valtiovarain ##ministeri . | | Multilingual BERT | Suomessa vai ##htuu kes ##än aikana sekä p ##ää ##minister ##i että valt ##io ##vara ##in ##minister ##i . | FinBERT has been pre-trained for 1 million steps on over 3 billion tokens (24B characters) of Finnish text drawn from news, online discussion, and internet crawls. By contrast, Multilingual BERT was trained on Wikipedia texts, where the Finnish Wikipedia text is approximately 3% of the amount used to train FinBERT. These features allow FinBERT to outperform not only Multilingual BERT but also all previously proposed models when fine-tuned for Finnish natural language processing tasks. ## Results ### Document classification ![learning curves for Yle and Ylilauta document classification](https://raw.githubusercontent.com/TurkuNLP/FinBERT/master/img/yle-ylilauta-curves.png) FinBERT outperforms multilingual BERT (M-BERT) on document classification over a range of training set sizes on the Yle news (left) and Ylilauta online discussion (right) corpora. (Baseline classification performance with [FastText](https://fasttext.cc/) included for reference.) [[code](https://github.com/spyysalo/finbert-text-classification)][[Yle data](https://github.com/spyysalo/yle-corpus)] [[Ylilauta data](https://github.com/spyysalo/ylilauta-corpus)] ### Named Entity Recognition Evaluation on FiNER corpus ([Ruokolainen et al 2019](https://arxiv.org/abs/1908.04212)) | Model | Accuracy | |--------------------|----------| | **FinBERT** | **92.40%** | | Multilingual BERT | 90.29% | | [FiNER-tagger](https://github.com/Traubert/FiNer-rules) (rule-based) | 86.82% | (FiNER tagger results from [Ruokolainen et al. 
2019](https://arxiv.org/pdf/1908.04212.pdf)) [[code](https://github.com/jouniluoma/keras-bert-ner)][[data](https://github.com/mpsilfve/finer-data)] ### Part of speech tagging Evaluation on three Finnish corpora annotated with [Universal Dependencies](https://universaldependencies.org/) part-of-speech tags: the Turku Dependency Treebank (TDT), FinnTreeBank (FTB), and Parallel UD treebank (PUD) | Model | TDT | FTB | PUD | |-------------------|-------------|-------------|-------------| | **FinBERT** | **98.23%** | **98.39%** | **98.08%** | | Multilingual BERT | 96.97% | 95.87% | 97.58% | [[code](https://github.com/spyysalo/bert-pos)][[data](http://hdl.handle.net/11234/1-2837)] ## Use with PyTorch If you want to use the model with the huggingface/transformers library, follow the steps in [huggingface_transformers.md](https://github.com/TurkuNLP/FinBERT/blob/master/huggingface_transformers.md) ## Previous releases ### Release 0.2 **October 24, 2019** Beta version of the BERT base uncased model trained from scratch on a corpus of Finnish news, online discussions, and crawled data. Download the model here: [bert-base-finnish-uncased.zip](http://dl.turkunlp.org/finbert/bert-base-finnish-uncased.zip) ### Release 0.1 **September 30, 2019** We release a beta version of the BERT base cased model trained from scratch on a corpus of Finnish news, online discussions, and crawled data. Download the model here: [bert-base-finnish-cased.zip](http://dl.turkunlp.org/finbert/bert-base-finnish-cased.zip)
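The FinBERT card links to separate instructions for huggingface/transformers. A minimal fill-mask sketch with this uncased checkpoint could look like the following; the example sentence is my own, not from the card.

```python
from transformers import pipeline

# Sketch: masked-word prediction with the uncased Finnish BERT.
fill_mask = pipeline("fill-mask", model="TurkuNLP/bert-base-finnish-uncased-v1")

for prediction in fill_mask("helsinki on suomen [MASK]."):
    print(f"{prediction['token_str']:>15}  {prediction['score']:.3f}")
```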
whitefox123/w2v-bert-2.0-arabic-3
whitefox123
2024-02-20T11:55:22Z
5
1
transformers
[ "transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-20T11:38:33Z
--- license: mit base_model: facebook/w2v-bert-2.0 tags: - generated_from_trainer datasets: - audiofolder metrics: - wer model-index: - name: w2v-bert-2.0-arabic-3 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: audiofolder type: audiofolder config: default split: test args: default metrics: - name: Wer type: wer value: 0.30018018018018017 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-arabic-3 This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3283 - Wer: 0.3002 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.482 | 0.96 | 300 | 0.3283 | 0.3002 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu118 - Datasets 2.17.1 - Tokenizers 0.15.2
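The card gives no inference example. Assuming the checkpoint exposes a CTC head compatible with the automatic-speech-recognition pipeline (which transformers added alongside Wav2Vec2-BERT), a sketch could be:

```python
from transformers import pipeline

# Sketch: transcribe a 16 kHz Arabic clip with the fine-tuned Wav2Vec2-BERT model.
asr = pipeline("automatic-speech-recognition", model="whitefox123/w2v-bert-2.0-arabic-3")

result = asr("arabic_sample.wav")  # hypothetical local file, resampled to 16 kHz
print(result["text"])
```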
dataeaze/dataeaze-text2sql-codellama_7b_instruct-clinton_text_to_sql_v1
dataeaze
2024-02-20T11:46:02Z
33
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-to-sql", "text2sql", "nlp2sql", "nlp-to-sql", "SQL", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T05:50:59Z
--- license: cc-by-nc-sa-4.0 language: - en library_name: transformers tags: - text-to-sql - text2sql - nlp2sql - nlp-to-sql - SQL --- # Model Card for text2sql <!-- Provide a quick summary of what the model is/does. --> LLM instruction finetuned for Text-to-SQL task. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [dataeaze systems pvt ltd](https://www.dataeaze.io/) - **Funded by :** [dataeaze systems pvt ltd](https://www.dataeaze.io/) - **Shared by :** [dataeaze systems pvt ltd](https://www.dataeaze.io/) - **Model type:** LlamaForCausalLM - **Language(s) (NLP):** English - **License:** [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) Model is made available under non-commercial use for research purposes only. For commercial usage please connect at [email protected] - **Finetuned from model :** [CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> Model can be used a tool to convert queries in expressed in natural language (English) to SQL statements ### Downstream Use <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> The model could be used as the initial stage in a data analytics / business intelligence application pipeline. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> Model has been fine tuned on a specific task of converting English language statements to SQL queries. Any use beyond this is not guaranteed to be accurate. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> - **Bias:** Trained for English language only. - **Risk:** Guardrails are reliant on the base models CodeLlama (Llama2). Finetuning could impact this behaviour. - **Limitations:** Intended to be a small model optimised for inference. Does not provide SoTA results on accuracy. ## How to Get Started with the Model Use the code below to get started with the model. ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained( "dataeaze/dataeaze-text2sql-codellama_7b_instruct-clinton_text_to_sql_v1", torch_dtype=torch.bfloat16, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained("dataeaze/dataeaze-text2sql-codellama_7b_instruct-clinton_text_to_sql_v1") # print("model device :", model.device) tokenizer.pad_token = tokenizer.eos_token model.eval() prompt = """ Below are sql tables schemas paired with instruction that describes a task. Using valid SQLite, write a response that appropriately completes the request for the provided tables. ### Instruction: How many transactions were made by a customer in a specific month? 
### Database: RewardsProgramDB61 ### Input: CREATE SCHEMA RewardsProgram; CREATE TABLE Customer ( CustomerID INT NOT NULL AUTO_INCREMENT, FirstName VARCHAR(50) NOT NULL, LastName VARCHAR(50) NOT NULL, Email VARCHAR(100) UNIQUE NOT NULL, Phone VARCHAR(20) UNIQUE, DateOfBirth DATE, PRIMARY KEY (CustomerID) ); CREATE TABLE Membership ( MembershipID INT NOT NULL AUTO_INCREMENT, MembershipType VARCHAR(50) NOT NULL, DiscountPercentage DECIMAL(5, 2) NOT NULL, ValidFrom DATETIME, ValidTo DATETIME, CustomerID INT NOT NULL, PRIMARY KEY (MembershipID), FOREIGN KEY (CustomerID) REFERENCES Customer(CustomerID) ); CREATE TABLE Transaction ( TransactionID INT NOT NULL AUTO_INCREMENT, TransactionDate TIMESTAMP, TotalAmount DECIMAL(10, 2) NOT NULL, CustomerID INT NOT NULL, PRIMARY KEY (TransactionID), FOREIGN KEY (CustomerID) REFERENCES Customer(CustomerID) ); CREATE TABLE TransactionDetail ( TransactionDetailID INT NOT NULL AUTO_INCREMENT, TransactionID INT NOT NULL, ProductID INT NOT NULL, Quantity INT NOT NULL, UnitPrice DECIMAL(10, 2) NOT NULL, PRIMARY KEY (TransactionDetailID), FOREIGN KEY (TransactionID) REFERENCES Transaction(TransactionID), FOREIGN KEY (ProductID) REFERENCES Product(ProductID) ); CREATE TABLE Product ( ProductID INT NOT NULL AUTO_INCREMENT, ProductName VARCHAR(100) NOT NULL, UnitPrice DECIMAL(10, 2) NOT NULL, AvailableQuantity INT NOT NULL, CreatedDate DATETIME, PRIMARY KEY (ProductID) ); ALTER TABLE Membership ADD CONSTRAINT FK_Membership_Customer FOREIGN KEY (CustomerID) REFERENCES Customer(CustomerID); ALTER TABLE TransactionDetail ADD CONSTRAINT FK_TransactionDetail_Transaction FOREIGN KEY (TransactionID) REFERENCES Transaction(TransactionID); ALTER TABLE TransactionDetail ADD CONSTRAINT FK_TransactionDetail_Product FOREIGN KEY (ProductID) REFERENCES Product(ProductID);" """ input_ids = tokenizer(prompt, padding=True, return_tensors='pt') outputs = model.generate( input_ids=input_ids['input_ids'].to(model.device), attention_mask=input_ids['attention_mask'].to(model.device), max_new_tokens=3072, ) generated_query = tokenizer.decode(outputs[0], skip_special_tokens=True) print(generated_query) ``` ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [SPIDER dataset Test Set](https://yale-lily.github.io/spider) #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> SQL queries are matched against the correct answer, with two types of evaluation * Execution with Values * Exact Set Match without Values ### Results ``` model-index: - name: dataeaze/dataeaze-text2sql-codellama_7b_instruct-dzsql results: - task: type: text-to-sql dataset: name: SPIDER 1.0 type: text-to-sql metrics: - name: Execution with Values type: Execution with Values value: 64.3 - name: Exact Set Match without Values type: Exact Set Match without Values value: 29.6 source: name: Spider 1.0 - Leaderboard url: https://yale-lily.github.io/spider ``` ## Model Card Authors * Suyash Chougule * Chittaranjan Rathod * Sourabh Daptardar ## Model Card Contact "dataeaze systems" <[email protected]>
Skier8402/marian-finetuned-kde4-en-to-fr
Skier8402
2024-02-20T11:45:17Z
12
1
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-20T10:53:49Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - translation - generated_from_trainer datasets: - kde4 model-index: - name: marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - eval_loss: 1.6967 - eval_bleu: 39.2747 - eval_runtime: 1030.7421 - eval_samples_per_second: 20.391 - eval_steps_per_second: 0.319 - step: 0 ## Model description More information needed ## Intended uses & limitations Mostly usable for translating the interface language of KDE apps from English to French. There are a couple of computer-science terms that the model may or may not translate correctly, and some domain adaptation may be needed because of the general-purpose training the base model received. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
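Since the card does not yet include an inference snippet, here is a minimal, hedged sketch of how this checkpoint could be used for English-to-French translation with the 🤗 Transformers `pipeline` API (the repo id comes from this card; the input sentence is purely illustrative):

```python
from transformers import pipeline

# Load the fine-tuned MarianMT checkpoint from the Hub (repo id taken from this card)
translator = pipeline("translation", model="Skier8402/marian-finetuned-kde4-en-to-fr")

# Translate an English UI string into French
result = translator("Default to expanded threads")
print(result[0]["translation_text"])
```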
hamzabk01/roberta-finetuned-subjqa-movies_2
hamzabk01
2024-02-20T11:38:20Z
22
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "base_model:deepset/roberta-base-squad2", "base_model:finetune:deepset/roberta-base-squad2", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-02T11:25:33Z
--- license: cc-by-4.0 base_model: deepset/roberta-base-squad2 tags: - generated_from_trainer model-index: - name: roberta-finetuned-subjqa-movies_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-subjqa-movies_2 This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
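As a hedged illustration (not part of the original card), the fine-tuned checkpoint can be exercised with the 🤗 Transformers `question-answering` pipeline; the repo id comes from this card, while the question and context below are invented examples:

```python
from transformers import pipeline

# Load the fine-tuned extractive QA checkpoint from the Hub (repo id taken from this card)
qa = pipeline("question-answering", model="hamzabk01/roberta-finetuned-subjqa-movies_2")

answer = qa(
    question="How was the acting?",
    context="The plot dragged in places, but the acting was superb throughout the film.",
)
print(answer["answer"], answer["score"])
```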
linoyts/huggy_lora_pivotal_1_repeats
linoyts
2024-02-20T11:37:28Z
4
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-02-20T10:52:42Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a <s0><s1> emoji dressed as yoda' output: url: "image_0.png" - text: 'a <s0><s1> emoji dressed as yoda' output: url: "image_1.png" - text: 'a <s0><s1> emoji dressed as yoda' output: url: "image_2.png" - text: 'a <s0><s1> emoji dressed as yoda' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a <s0><s1> emoji license: openrail++ --- # SDXL LoRA DreamBooth - linoyts/huggy_lora_pivotal_1_repeats <Gallery /> ## Model description ### These are linoyts/huggy_lora_pivotal_1_repeats LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`huggy_lora_pivotal_1_repeats.safetensors` here 💾](/linoyts/huggy_lora_pivotal_1_repeats/blob/main/huggy_lora_pivotal_1_repeats.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:huggy_lora_pivotal_1_repeats:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`huggy_lora_pivotal_1_repeats_emb.safetensors` here 💾](/linoyts/huggy_lora_pivotal_1_repeats/blob/main/huggy_lora_pivotal_1_repeats_emb.safetensors)**. - Place it in your `embeddings` folder - Use it by adding `huggy_lora_pivotal_1_repeats_emb` to your prompt. For example, `a huggy_lora_pivotal_1_repeats_emb emoji` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('linoyts/huggy_lora_pivotal_1_repeats', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='linoyts/huggy_lora_pivotal_1_repeats', filename='huggy_lora_pivotal_1_repeats_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('a <s0><s1> emoji dressed as yoda').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/linoyts/huggy_lora_pivotal_1_repeats/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
VinayHajare/vizdoom_deathmatch_bots
VinayHajare
2024-02-20T11:29:24Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-29T11:40:59Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_deathmatch_bots type: doom_deathmatch_bots metrics: - type: mean_reward value: 0.50 +/- 1.20 name: mean_reward verified: false --- An **APPO** model trained on the **doom_deathmatch_bots** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r VinayHajare/vizdoom_deathmatch_bots ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_deathmatch_bots --train_dir=./train_dir --experiment=vizdoom_deathmatch_bots ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_deathmatch_bots --train_dir=./train_dir --experiment=vizdoom_deathmatch_bots --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Dimasnoufal/resnet18-flower-classification
Dimasnoufal
2024-02-20T11:17:31Z
0
0
null
[ "region:us" ]
null
2024-02-20T11:09:03Z
<h1>Flower Classifier</h1> Fine-tuned from PyTorch ResNet18. <h2>Flower Label</h2> 0: Lotus<br> 1: Orchid<br> 2: Tulip<br> 3: Sunflower<br> 4: Lilly<br> <h2>How to Use:</h2> <pre><code class="python"> import torch from torchvision.models import resnet18 # Recreate the ResNet18 architecture without pretrained weights model = resnet18(weights=None) # Resize the classifier head for the 5 flower classes listed above (assumption about how the checkpoint was fine-tuned) model.fc = torch.nn.Linear(model.fc.in_features, 5) # Load the fine-tuned weights and switch to inference mode model_weights_path = 'resnet18_flower_classification.pth' model.load_state_dict(torch.load(model_weights_path)) model.eval() </code></pre>
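A hedged inference sketch (not part of the original card): it assumes the model was trained with the standard 224×224 ImageNet-style preprocessing, and it reuses the <code>model</code> object loaded above together with the label order listed in this card.
<pre><code class="python">
import torch
from PIL import Image
from torchvision import transforms

# ImageNet-style preprocessing (assumption; adjust if the training pipeline differed)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

labels = ["Lotus", "Orchid", "Tulip", "Sunflower", "Lilly"]  # label order from this card

image = Image.open("flower.jpg").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
print(labels[logits.argmax(dim=1).item()])
</code></pre>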
speechbrain/s2st-transformer-fr-en-hubert-l6-k100-cvss
speechbrain
2024-02-20T11:15:07Z
6
3
speechbrain
[ "speechbrain", "speech-to-speech-translation", "fr", "en", "dataset:CVSS", "arxiv:2201.03713", "arxiv:2112.08352", "arxiv:2204.02967", "license:apache-2.0", "region:us" ]
null
2023-10-06T09:46:33Z
--- language: - fr - en inference: false tags: - speech-to-speech-translation - speechbrain license: apache-2.0 datasets: - CVSS --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Speech-to-Unit Translation trained on CVSS This repository provides all the necessary tools for using a speech-to-unit translation (S2UT) model based on a pre-trained Wav2Vec 2.0 encoder and a transformer decoder, trained on the [CVSS](https://arxiv.org/abs/2201.03713) dataset. The implementation is based on the [Textless Speech-to-Speech Translation](https://arxiv.org/abs/2112.08352) and [Enhanced Direct Speech-to-Speech Translation Using Self-supervised Pre-training and Data Augmentation](https://arxiv.org/abs/2204.02967) papers. The pre-trained model takes a waveform as input and produces discrete self-supervised representations as output. Typically, a vocoder (e.g., HiFiGAN Unit) is utilized on top of the S2UT model to produce a waveform. To generate the discrete self-supervised representations, we employ a K-means clustering model trained on the 6th layer of HuBERT, with `k=100`. ## Install SpeechBrain First of all, please install transformers and SpeechBrain with the following command: ``` pip install speechbrain transformers ``` Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform speech-to-speech translation (S2ST) with the S2UT model and the Vocoder ```python import torch import torchaudio from speechbrain.inference.ST import EncoderDecoderS2UT from speechbrain.inference.vocoders import UnitHIFIGAN # Initialize S2UT (Transformer) and Vocoder (HiFIGAN Unit) s2ut = EncoderDecoderS2UT.from_hparams(source="speechbrain/s2st-transformer-fr-en-hubert-l6-k100-cvss", savedir="tmpdir_s2ut") hifi_gan_unit = UnitHIFIGAN.from_hparams(source="speechbrain/tts-hifigan-unit-hubert-l6-k100-ljspeech", savedir="tmpdir_vocoder") # Running the S2UT model codes = s2ut.translate_file("speechbrain/s2st-transformer-fr-en-hubert-l6-k100-cvss/example-fr.wav") codes = torch.IntTensor(codes) # Running Vocoder (units-to-waveform) waveforms = hifi_gan_unit.decode_unit(codes) # Save the waveform torchaudio.save('example.wav', waveforms.squeeze(1), 16000) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. #### Referencing SpeechBrain ``` @misc{SB2021, author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua }, title = {SpeechBrain}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\\\url{https://github.com/speechbrain/speechbrain}}, } ``` #### About SpeechBrain SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains. 
Website: https://speechbrain.github.io/ GitHub: https://github.com/speechbrain/speechbrain
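Returning to the "Inference on GPU" note above, a minimal sketch of the initialization call could look like this (identical to the usage example earlier in this card except for the added `run_opts` argument):

```python
from speechbrain.inference.ST import EncoderDecoderS2UT

# Same call as in the usage example above, with run_opts added to place the model on the GPU
s2ut = EncoderDecoderS2UT.from_hparams(
    source="speechbrain/s2st-transformer-fr-en-hubert-l6-k100-cvss",
    savedir="tmpdir_s2ut",
    run_opts={"device": "cuda"},
)
```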
vadhri/a2c-PandaReachDense-v3
vadhri
2024-02-20T11:14:53Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T11:10:43Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.34 +/- 0.28 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
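Since the usage section above still contains the template placeholder, here is a hedged sketch of how such a Stable-Baselines3 checkpoint is typically loaded; the zip filename inside the repo is an assumption based on the usual `huggingface_sb3` naming convention and may differ here:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the filename is assumed to follow the
# common <algo>-<env>.zip convention and may need adjusting for this repo
checkpoint = load_from_hub(
    repo_id="vadhri/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```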
bigquant/Senku-70B-AWQ-4bit-GEMM
bigquant
2024-02-20T11:14:45Z
10
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "AWQ", "Miqu", "Senku", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-02-20T10:51:15Z
--- library_name: transformers tags: - AWQ - Miqu - Senku --- A quantized version of [Shinoji Research's Senku 70B Full](https://huggingface.co/ShinojiResearch/Senku-70B-Full), quantized with AWQ using the following settings: ``` {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" } ```
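As a hedged loading sketch (not part of the original card): recent 🤗 Transformers versions can load AWQ checkpoints directly through `from_pretrained` when the `autoawq` package is installed; the prompt below is purely illustrative and no particular chat template is assumed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigquant/Senku-70B-AWQ-4bit-GEMM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Requires the autoawq package; the weights are loaded already quantized (4-bit, GEMM kernels)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a short haiku about winter.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```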
Meretof/q-FrozenLake-v1-4x4-noSlippery
Meretof
2024-02-20T11:06:05Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-20T11:06:02Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Meretof/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
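A hedged continuation of the snippet above (not part of the original card): it reuses the loaded `model` and `env`, and assumes the saved dictionary follows the Deep RL course convention of storing the learned table under a `"qtable"` key and that the Gymnasium step API is in use — adjust if your artifact differs:

```python
import numpy as np

# Greedy rollout using the learned Q-table ("qtable" key assumed from the course convention)
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```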