modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
kanishka/smolm-autoreg-bpe-seed_111
kanishka
2024-03-26T01:05:08Z
145
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-20T22:09:30Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: smolm-autoreg-bpe-seed_111 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smolm-autoreg-bpe-seed_111 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4759 - Accuracy: 0.5000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 16 - eval_batch_size: 128 - seed: 111 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 24000 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 3.0508 | 1.0 | 2928 | 3.0180 | 0.4367 | | 2.7062 | 2.0 | 5856 | 2.7857 | 0.4600 | | 2.5923 | 3.0 | 8784 | 2.6900 | 0.4700 | | 2.5183 | 4.0 | 11712 | 2.6405 | 0.4760 | | 2.4632 | 5.0 | 14640 | 2.6110 | 0.4799 | | 2.4241 | 6.0 | 17568 | 2.5840 | 0.4835 | | 2.3815 | 7.0 | 20496 | 2.5728 | 0.4851 | | 2.3595 | 8.0 | 23424 | 2.5581 | 0.4867 | | 2.2838 | 9.0 | 26352 | 2.5014 | 0.4949 | | 2.1364 | 10.0 | 29280 | 2.4759 | 0.5000 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
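The card above gives training details but no inference snippet. A minimal generation sketch, assuming the OPT-based checkpoint works with the standard 🤗 Transformers `pipeline` API (not part of the original card):

```python
from transformers import pipeline

# Hedged sketch: assumes the checkpoint loads with the standard
# text-generation pipeline, as its tags suggest.
generator = pipeline("text-generation", model="kanishka/smolm-autoreg-bpe-seed_111")
print(generator("The little dog", max_new_tokens=20)[0]["generated_text"])
```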
ucmp137538/trained_weigths
ucmp137538
2024-03-26T00:55:35Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-03-23T15:14:38Z
--- library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Llama-2-7b-chat-hf model-index: - name: trained_weigths results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained_weigths This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6798 | 1.0 | 694 | 0.5959 | | 0.538 | 2.0 | 1388 | 0.5740 | | 0.4497 | 3.0 | 2082 | 0.5717 | | 0.3353 | 4.0 | 2776 | 0.5922 | ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
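This repo holds a PEFT (LoRA) adapter for `meta-llama/Llama-2-7b-chat-hf`, so loading presumably goes through PEFT's auto classes. A hedged sketch, assuming access to the gated Llama-2 base model:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Sketch only: AutoPeftModelForCausalLM pulls the base model named in the
# adapter config (meta-llama/Llama-2-7b-chat-hf) and applies the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    "ucmp137538/trained_weigths", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```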
yeye776/bert-kor-base
yeye776
2024-03-26T00:53:32Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-26T00:53:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ntvcie/GemmaVinhntV2
ntvcie
2024-03-26T00:45:10Z
3
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-26T00:40:21Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-7b-bnb-4bit --- # Uploaded model - **Developed by:** ntvcie - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
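The card has no inference code. A plain Transformers sketch, assuming the repo contains merged full Gemma weights (as the `safetensors`/`gemma` tags suggest) rather than an adapter:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: assumes merged weights loadable with the standard Auto classes.
tokenizer = AutoTokenizer.from_pretrained("ntvcie/GemmaVinhntV2")
model = AutoModelForCausalLM.from_pretrained("ntvcie/GemmaVinhntV2", device_map="auto")
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```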
magjico/ppo-Pyramids
magjico
2024-03-26T00:41:44Z
16
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-03-26T00:41:41Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: magjico/ppo-Pyramids 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
yetanotherhif/jmg_mistral_7b_code
yetanotherhif
2024-03-26T00:40:46Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-08T19:57:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ksgk-fy/M7Percival_010.66-0.78-0.34-0.69-0.16-0.4-7B
Ksgk-fy
2024-03-26T00:40:45Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:AurelPx/Percival_01-7b-slerp", "base_model:merge:AurelPx/Percival_01-7b-slerp", "base_model:liminerity/M7-7b", "base_model:merge:liminerity/M7-7b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-26T00:36:05Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - automerger base_model: - liminerity/M7-7b - AurelPx/Percival_01-7b-slerp --- ## 🧩 Configuration ```yaml slices: - sources: - model: liminerity/M7-7b layer_range: [0, 32] - model: AurelPx/Percival_01-7b-slerp layer_range: [0, 32] merge_method: slerp base_model: liminerity/M7-7b parameters: t: - filter: self_attn value: [0.660004154579889, 0.7825172167749694, 0.3387619390522808, 0.6943452585157117, 0.1642623077558668] - filter: mlp value: [0.33999584542011096, 0.21748278322503056, 0.3056547414842883, 0.3056547414842883, 0.8357376922441332] - value: 0.3961634851125484 dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Ksgk-fy/M7Percival_010.66-0.78-0.34-0.69-0.16-0.4-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
thrunlab/sparse_llama_7b_hf_refined_web_70p_2024-03-25
thrunlab
2024-03-26T00:37:38Z
6
0
transformers
[ "transformers", "safetensors", "sparse_llama", "text-generation", "generated_from_trainer", "custom_code", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "autotrain_compatible", "region:us" ]
text-generation
2024-03-25T12:40:34Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: sparse_llama_7b_hf_refined_web_70p_2024-03-25 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sparse_llama_7b_hf_refined_web_70p_2024-03-25 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 0 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3676 | 0.01 | 25 | 2.5933 | | 2.3572 | 0.02 | 50 | 2.5793 | | 2.4503 | 0.02 | 75 | 2.5568 | | 2.3803 | 0.03 | 100 | 2.5265 | | 2.4451 | 0.04 | 125 | 2.4951 | | 2.2793 | 0.05 | 150 | 2.4778 | | 2.2444 | 0.06 | 175 | 2.4667 | | 2.406 | 0.06 | 200 | 2.4572 | | 2.3583 | 0.07 | 225 | 2.4508 | | 2.3262 | 0.08 | 250 | 2.4538 | | 2.258 | 0.09 | 275 | 2.4476 | | 2.2841 | 0.1 | 300 | 2.4456 | | 2.3232 | 0.1 | 325 | 2.4379 | | 2.2974 | 0.11 | 350 | 2.4353 | | 2.2216 | 0.12 | 375 | 2.4379 | | 2.3179 | 0.13 | 400 | 2.4340 | | 2.3006 | 0.14 | 425 | 2.4333 | | 2.2603 | 0.14 | 450 | 2.4333 | | 2.3371 | 0.15 | 475 | 2.4384 | | 2.3453 | 0.16 | 500 | 2.4328 | | 2.254 | 0.17 | 525 | 2.4306 | | 2.2423 | 0.18 | 550 | 2.4298 | | 2.3666 | 0.18 | 575 | 2.4293 | | 2.259 | 0.19 | 600 | 2.4298 | | 2.2786 | 0.2 | 625 | 2.4290 | | 2.3493 | 0.21 | 650 | 2.4275 | | 2.2532 | 0.22 | 675 | 2.4255 | | 2.2698 | 0.22 | 700 | 2.4233 | | 2.2949 | 0.23 | 725 | 2.4277 | | 2.1918 | 0.24 | 750 | 2.4268 | | 2.2762 | 0.25 | 775 | 2.4243 | | 2.3221 | 0.26 | 800 | 2.4256 | | 2.278 | 0.26 | 825 | 2.4273 | | 2.2406 | 0.27 | 850 | 2.4223 | | 2.2466 | 0.28 | 875 | 2.4252 | | 2.2199 | 0.29 | 900 | 2.4247 | | 2.4064 | 0.3 | 925 | 2.4259 | | 2.3672 | 0.3 | 950 | 2.4237 | | 2.3096 | 0.31 | 975 | 2.4226 | | 2.1979 | 0.32 | 1000 | 2.4257 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.2
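The `custom_code` tag indicates the repo ships its own `sparse_llama` modeling code, so loading presumably requires `trust_remote_code=True`. A hedged sketch (review the remote code before running it):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "thrunlab/sparse_llama_7b_hf_refined_web_70p_2024-03-25"
# trust_remote_code executes the repo's custom modeling code; inspect it first.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
```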
commandai/ppo-LunarLander-v2
commandai
2024-03-26T00:29:25Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-26T00:29:06Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 262.20 +/- 17.07 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention — check the repo's file list): ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="commandai/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
zelus82/Obelix-Phi2-v0
zelus82
2024-03-26T00:26:50Z
131
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "merge", "mergekit", "lazymergekit", "zelus82/Obelix-Phi2", "amu/spin-phi2", "conversational", "custom_code", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-26T00:25:27Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - zelus82/Obelix-Phi2 - amu/spin-phi2 --- # Obelix-Phi2-v0 Obelix-Phi2-v0 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [zelus82/Obelix-Phi2](https://huggingface.co/zelus82/Obelix-Phi2) * [amu/spin-phi2](https://huggingface.co/amu/spin-phi2) ## 🧩 Configuration ```yaml slices: - sources: - model: zelus82/Obelix-Phi2 layer_range: [0, 32] - model: amu/spin-phi2 layer_range: [0, 32] merge_method: slerp base_model: zelus82/Obelix-Phi2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
louislu9911/convnextv2-tiny-1k-224-finetuned-cassava-leaf-disease
louislu9911
2024-03-26T00:22:20Z
160
0
transformers
[ "transformers", "safetensors", "convnextv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/convnextv2-tiny-1k-224", "base_model:finetune:facebook/convnextv2-tiny-1k-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-26T00:07:46Z
--- license: apache-2.0 base_model: facebook/convnextv2-tiny-1k-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnextv2-tiny-1k-224-finetuned-cassava-leaf-disease results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8649532710280374 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnextv2-tiny-1k-224-finetuned-cassava-leaf-disease This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4109 - Accuracy: 0.8650 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 480 - eval_batch_size: 480 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 1920 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 7.8796 | 0.98 | 10 | 3.9572 | 0.1706 | | 2.3762 | 1.95 | 20 | 1.4334 | 0.6178 | | 1.1413 | 2.93 | 30 | 0.8877 | 0.6841 | | 0.7549 | 4.0 | 41 | 0.6403 | 0.7724 | | 0.5904 | 4.98 | 51 | 0.5366 | 0.8098 | | 0.5152 | 5.95 | 61 | 0.4799 | 0.8369 | | 0.4764 | 6.93 | 71 | 0.4567 | 0.8486 | | 0.4386 | 8.0 | 82 | 0.4421 | 0.8509 | | 0.4306 | 8.98 | 92 | 0.4381 | 0.8519 | | 0.4266 | 9.95 | 102 | 0.4296 | 0.8603 | | 0.4072 | 10.93 | 112 | 0.4196 | 0.8593 | | 0.4033 | 12.0 | 123 | 0.4127 | 0.8621 | | 0.3982 | 12.98 | 133 | 0.4125 | 0.8640 | | 0.3993 | 13.95 | 143 | 0.4097 | 0.8631 | | 0.3812 | 14.63 | 150 | 0.4109 | 0.8650 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.1
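A minimal inference sketch for this ConvNeXtV2 classifier, assuming the standard image-classification pipeline applies (the image path is a placeholder):

```python
from transformers import pipeline

# Hedged sketch; "leaf.jpg" is a hypothetical local image path or URL.
classifier = pipeline(
    "image-classification",
    model="louislu9911/convnextv2-tiny-1k-224-finetuned-cassava-leaf-disease",
)
print(classifier("leaf.jpg"))
```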
earlalvarado-pi/en_core_web_sm
earlalvarado-pi
2024-03-26T00:19:26Z
1
0
spacy
[ "spacy", "token-classification", "en", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
token-classification
2024-03-25T21:02:32Z
--- tags: - spacy - token-classification language: - en license: mit model-index: - name: en_core_web_sm results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.8454836771 - name: NER Recall type: recall value: 0.8456530449 - name: NER F Score type: f_score value: 0.8455683525 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.97246532 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.9175304332 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.89874821 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.9059485531 --- ### Details: https://spacy.io/models/en#en_core_web_sm This is a clone created to test handler.py creation. All rights reserved to owner of original model. English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `en_core_web_sm` | | **Version** | `3.7.1` | | **spaCy** | `>=3.7.2,<3.8.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (113 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.86 | | `TOKEN_P` | 99.57 | | `TOKEN_R` | 99.58 | | `TOKEN_F` | 99.57 | | `TAG_ACC` | 97.25 | | `SENTS_P` | 92.02 | | `SENTS_R` | 89.21 | | `SENTS_F` | 90.59 | | `DEP_UAS` | 91.75 | | `DEP_LAS` | 89.87 | | `ENTS_P` | 84.55 | | `ENTS_R` | 84.57 | | `ENTS_F` | 84.56 |
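Since this is a clone of spaCy's `en_core_web_sm`, standard spaCy usage should apply once the package is installed; a minimal sketch:

```python
import spacy

# Assumes the en_core_web_sm package is installed, e.g. via
# `python -m spacy download en_core_web_sm` for the upstream pipeline.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
print([(ent.text, ent.label_) for ent in doc.ents])
```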
ChaoticNeutrals/Eris_PrimeV4-Vision-7B
ChaoticNeutrals
2024-03-26T00:18:06Z
48
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:ChaoticNeutrals/Eris_PrimeV3.075-Vision-7B", "base_model:merge:ChaoticNeutrals/Eris_PrimeV3.075-Vision-7B", "base_model:Nitral-Archive/Eris_PrimeV3.05-Vision-7B", "base_model:merge:Nitral-Archive/Eris_PrimeV3.05-Vision-7B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-25T16:14:17Z
--- base_model: - Nitral-AI/Eris_PrimeV3.05-Vision-7B - Nitral-AI/Eris_PrimeV3.075-Vision-7B library_name: transformers tags: - mergekit - merge license: other --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/xnDxqMZRVOAUfTSerDFJB.jpeg) # Eris Prime: Version 4.0 Somewhere between v3.05 and v3.075 in overall intelligence and RP capability. Quants available here, thanks to Lewdiculous: https://huggingface.co/Lewdiculous/Eris_PrimeV4-Vision-7B-GGUF-IQ-Imatrix # Vision/multimodal capabilities: If you want to use vision functionality: * You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp). To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo. * You can load the **mmproj** by using the corresponding section in the interface: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
sieciowe/Qra-gguf-PL
sieciowe
2024-03-26T00:16:20Z
3
0
null
[ "gguf", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-03-23T17:10:13Z
--- license: llama2 --- Quantized files of the Polish Qra-1b, Qra-7b, and Qra-13b models from Gdańsk University of Technology, taken from the first versions published on the authors' profile - https://huggingface.co/OPI-PG --- ... uploading files in progress ...
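GGUF files are typically run with llama.cpp or its Python bindings. A hedged sketch with `llama-cpp-python`; the filename is hypothetical, so pick an actual `.gguf` file from the repo:

```python
from llama_cpp import Llama

# "qra-7b.Q4_K_M.gguf" is a placeholder filename; download a real file from the repo first.
llm = Llama(model_path="qra-7b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Stolica Polski to", max_tokens=16)
print(out["choices"][0]["text"])
```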
NotoriousH2/v3_gptq
NotoriousH2
2024-03-26T00:15:06Z
4
0
transformers
[ "transformers", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-03-26T00:08:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JayhC/Noromaid-v0.1-mixtral-8x7b-Instruct-v3-4.5bpw-h6-exl2-rpcal
JayhC
2024-03-26T00:04:26Z
3
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-25T23:51:01Z
--- license: cc-by-nc-4.0 --- 4.5bpw/h6 exl2 quantization of [NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3) using the [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) calibration dataset, to fully use my 31 GB of VRAM (one GB short of 32 because of Windows). --- **ORIGINAL CARD:** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/vwcJfOnL-2QDJ0ShfxRJ5.png) --- # Disclaimer: ## This model is experimental, do not expect everything to work. This model uses the Alpaca **prompting format** (or just directly download the SillyTavern instruct preset [here](https://files.catbox.moe/0ohmco.json)) --- Beeg noromaid on ***steroids***. Suitable for RP, ERP. This time based on Mixtral Instruct, and it seems to do wonders! This model was trained for 8h(v1) + 8h(v2) + 12h(v3) on customized, modified datasets, focusing on RP, uncensoring, and a modified version of the Alpaca prompting (already used in LimaRP), which should be at the same conversational level as ChatLM or Llama2-Chat without adding any additional special tokens. If you want more info about this model (and v1 + v2), you can check out [my blog post](https://ikaridevgit.github.io/index.html?p=7&blog=blogid-6&bo=true) [Recommended settings - Settings 1](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-v3/discussions/1) [Recommended settings - Settings 2 (idk if they are any good)](https://files.catbox.moe/fv4xhu.json) ## Credits: - Undi - IkariDev <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains FP16 files of Noromaid-v0.1-mixtral-8x7b-Instruct-v3. [FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3) <!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)--> <!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)--> <!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)--> <!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)--> <!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)--> [GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3-GGUF) <!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)--> ## Ratings: Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here! No ratings yet! If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi".
<!-- description end --> <!-- prompt-template start --> ### Custom format: ``` ### Instruction: {system prompt} ### Input: {input} ### Response: {reply} ``` ## Datasets used: - Aesir 1 and 2 ([MinervaAI](https://huggingface.co/MinervaAI) / [Gryphe](https://huggingface.co/Gryphe)) - [LimaRP-20231109](https://huggingface.co/datasets/lemonilia/LimaRP) ([Lemonilia](https://huggingface.co/lemonilia)) - [ToxicDPO-NoWarning](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt) ([unalignment org repo](https://huggingface.co/unalignment) + [Undi](https://huggingface.co/Undi95)) - [No-robots-ShareGPT](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt) ([Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun)) ## Others Undi: If you want to support me, you can [here](https://ko-fi.com/undiai). IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
tonio-m/ppo-Huggy
tonio-m
2024-03-26T00:00:18Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-03-26T00:00:16Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: tonio-m/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
jhamel/lora_model
jhamel
2024-03-25T23:56:08Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-25T23:55:59Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/mistral-7b-bnb-4bit --- # Uploaded model - **Developed by:** jhamel - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Holarissun/trl_rm_tldr_gptj
Holarissun
2024-03-25T23:30:07Z
160
1
peft
[ "peft", "safetensors", "generated_from_trainer", "arxiv:2403.12017", "base_model:EleutherAI/gpt-j-6b", "base_model:adapter:EleutherAI/gpt-j-6b", "license:apache-2.0", "region:us" ]
null
2024-01-12T16:41:17Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer metrics: - accuracy base_model: EleutherAI/gpt-j-6b model-index: - name: trl_rm_tldr_gptj results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trl_rm_tldr_gptj This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on the TL;DR dataset. It achieves the following results on the evaluation set: - Loss: 0.6624 - Accuracy: 0.6857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5633 | 1.0 | 22660 | 0.6624 | 0.6857 | ### Framework versions - PEFT 0.7.1.dev0 - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.15.0 - Tokenizers 0.15.0 ### BibTex Citation If you would like to cite our paper when using the model, please use ``` @article{sun2024supervised, title={Supervised Fine-Tuning as Inverse Reinforcement Learning}, author={Sun, Hao}, journal={arXiv preprint arXiv:2403.12017}, year={2024} } ```
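This is a PEFT adapter trained as a TRL reward model on GPT-J, so scoring text presumably goes through a sequence-classification head. A hedged sketch; the single-logit reward head (`num_labels=1`) is an assumption based on TRL's usual reward-model setup:

```python
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

# Sketch only; num_labels=1 assumes TRL's standard scalar reward head.
model = AutoPeftModelForSequenceClassification.from_pretrained(
    "Holarissun/trl_rm_tldr_gptj", num_labels=1, torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
inputs = tokenizer("POST: an example post. TL;DR: a candidate summary", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits.squeeze()
```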
DaJulster/my_awesome_swag_model
DaJulster
2024-03-25T23:26:07Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "dataset:swag", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2024-03-25T18:09:59Z
--- license: apache-2.0 base_model: google-bert/bert-base-uncased tags: - generated_from_trainer datasets: - swag metrics: - accuracy model-index: - name: my_awesome_swag_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_swag_model This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 0.7748 - Accuracy: 0.8005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7515 | 1.0 | 2299 | 0.5735 | 0.7783 | | 0.3807 | 2.0 | 4598 | 0.5881 | 0.7972 | | 0.1533 | 3.0 | 6897 | 0.7748 | 0.8005 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
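For a SWAG-style multiple-choice model, each candidate ending is paired with the prompt and the batch is reshaped to `(batch, num_choices, seq_len)`. A minimal sketch following that standard pattern (the example sentences are made up):

```python
from transformers import AutoModelForMultipleChoice, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DaJulster/my_awesome_swag_model")
model = AutoModelForMultipleChoice.from_pretrained("DaJulster/my_awesome_swag_model")

prompt = "She opens the fridge and"
candidates = ["takes out a carton of milk.", "drives away down the highway."]
# Tokenize one (prompt, candidate) pair per choice, then add a batch dimension.
enc = tokenizer([[prompt, c] for c in candidates], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in enc.items()})
print(candidates[outputs.logits.argmax().item()])
```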
Smuggling1710/SonyaKAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp
Smuggling1710
2024-03-25T23:18:23Z
7
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp", "SanjiWatsuki/Sonya-7B", "base_model:SanjiWatsuki/Sonya-7B", "base_model:merge:SanjiWatsuki/Sonya-7B", "base_model:Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp", "base_model:merge:Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-25T23:13:12Z
--- tags: - merge - mergekit - lazymergekit - Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp - SanjiWatsuki/Sonya-7B base_model: - Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp - SanjiWatsuki/Sonya-7B --- # SonyaKAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp SonyaKAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp](https://huggingface.co/Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp) * [SanjiWatsuki/Sonya-7B](https://huggingface.co/SanjiWatsuki/Sonya-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp layer_range: [0, 32] - model: SanjiWatsuki/Sonya-7B layer_range: [0, 32] merge_method: slerp base_model: Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Smuggling1710/SonyaKAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Flamgrise/DE_bios_Lol_Fine-tuned
Flamgrise
2024-03-25T23:15:03Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text-classification", "generated_from_trainer", "base_model:facebook/bart-large-mnli", "base_model:finetune:facebook/bart-large-mnli", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T23:13:44Z
--- license: mit base_model: facebook/bart-large-mnli tags: - generated_from_trainer metrics: - f1 model-index: - name: ENG-full-fined-tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ENG-full-fined-tuned This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5407 - F1: 0.0724 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 45 | 1.7846 | 0.0698 | | No log | 2.0 | 90 | 1.7658 | 0.0698 | | No log | 3.0 | 135 | 1.7458 | 0.0698 | | No log | 4.0 | 180 | 1.7913 | 0.0698 | | No log | 5.0 | 225 | 1.7677 | 0.1386 | | No log | 6.0 | 270 | 1.8333 | 0.1000 | | No log | 7.0 | 315 | 2.1814 | 0.0607 | | No log | 8.0 | 360 | 2.2701 | 0.0781 | | No log | 9.0 | 405 | 2.3223 | 0.1206 | | No log | 10.0 | 450 | 2.4003 | 0.0879 | | No log | 11.0 | 495 | 2.4776 | 0.0870 | | 1.3449 | 12.0 | 540 | 2.5407 | 0.0724 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
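A minimal inference sketch, assuming the fine-tuned head works with the standard text-classification pipeline (the label set is not documented in the card):

```python
from transformers import pipeline

# Hedged sketch; the model card does not document the label names.
classifier = pipeline("text-classification", model="Flamgrise/DE_bios_Lol_Fine-tuned")
print(classifier("An example biography sentence to classify."))
```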
bartowski/Tess-7B-v2.0-exl2
bartowski
2024-03-25T23:01:45Z
0
0
null
[ "text-generation", "license:apache-2.0", "region:us" ]
text-generation
2024-03-25T23:01:43Z
--- license: apache-2.0 quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of Tess-7B-v2.0 Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.16">turboderp's ExLlamaV2 v0.0.16</a> for quantization. <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b> Each branch contains an individual bits-per-weight quantization, with the main one containing only the measurement.json for further conversions. Original model: https://huggingface.co/migtissera/Tess-7B-v2.0 | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/Tess-7B-v2.0-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/Tess-7B-v2.0-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/Tess-7B-v2.0-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/Tess-7B-v2.0-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/Tess-7B-v2.0-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Tess-7B-v2.0-exl2 Tess-7B-v2.0-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Tess-7B-v2.0-exl2`: ```shell mkdir Tess-7B-v2.0-exl2 huggingface-cli download bartowski/Tess-7B-v2.0-exl2 --local-dir Tess-7B-v2.0-exl2 --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: Linux: ```shell mkdir Tess-7B-v2.0-exl2-6_5 huggingface-cli download bartowski/Tess-7B-v2.0-exl2 --revision 6_5 --local-dir Tess-7B-v2.0-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell mkdir Tess-7B-v2.0-exl2-6.5 huggingface-cli download bartowski/Tess-7B-v2.0-exl2 --revision 6_5 --local-dir Tess-7B-v2.0-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
D3STRON/multi-genre
D3STRON
2024-03-25T23:00:16Z
144
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-24T09:51:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AlignmentResearch/robust_llm_pythia-tt-1b-mz-ada-v3-ch-140000
AlignmentResearch
2024-03-25T22:54:22Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b-deduped", "base_model:finetune:EleutherAI/pythia-1b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:52:15Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-1b-deduped model-index: - name: robust_llm_pythia-tt-1b-mz-ada-v3-ch-140000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-1b-mz-ada-v3-ch-140000 This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
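A minimal usage sketch for this classifier (the card's usage section is empty): the label semantics are not documented, so the generic `LABEL_*` ids the model returns are shown as-is, and the input text is hypothetical. The same pattern applies to the sibling checkpoints in this series.

```python
from transformers import pipeline

# Text-classification head fine-tuned from pythia-1b-deduped; label names undocumented.
classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-tt-1b-mz-ada-v3-ch-140000",
)
print(classifier("Example input text"))  # e.g. [{'label': 'LABEL_0', 'score': 0.97}]
```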
AlignmentResearch/robust_llm_pythia-tt-1b-mz-ada-v3-ch-137000
AlignmentResearch
2024-03-25T22:54:19Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b-deduped", "base_model:finetune:EleutherAI/pythia-1b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:52:17Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-1b-deduped model-index: - name: robust_llm_pythia-tt-1b-mz-ada-v3-ch-137000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-1b-mz-ada-v3-ch-137000 This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-1b-mz-ada-v3-ch-136000
AlignmentResearch
2024-03-25T22:52:57Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b-deduped", "base_model:finetune:EleutherAI/pythia-1b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:50:59Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-1b-deduped model-index: - name: robust_llm_pythia-tt-1b-mz-ada-v3-ch-136000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-1b-mz-ada-v3-ch-136000 This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
louislu9911/resnet-152-finetuned-cassava-leaf-disease
louislu9911
2024-03-25T22:51:59Z
57
0
transformers
[ "transformers", "safetensors", "resnet", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/resnet-152", "base_model:finetune:microsoft/resnet-152", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-25T10:16:38Z
---
license: apache-2.0
base_model: microsoft/resnet-152
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-152-finetuned-cassava-leaf-disease
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7397196261682243
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# resnet-152-finetuned-cassava-leaf-disease

This model is a fine-tuned version of [microsoft/resnet-152](https://huggingface.co/microsoft/resnet-152) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.7961
- Accuracy: 0.7397

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 480
- eval_batch_size: 480
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1920
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 7.309 | 0.98 | 10 | 7.0088 | 0.0028 |
| 6.9946 | 1.95 | 20 | 6.4363 | 0.0061 |
| 6.4082 | 2.93 | 30 | 5.5840 | 0.0673 |
| 5.6018 | 4.0 | 41 | 4.1884 | 0.3687 |
| 4.5652 | 4.98 | 51 | 3.3123 | 0.4640 |
| 3.6106 | 5.95 | 61 | 2.7918 | 0.5136 |
| 2.9184 | 6.93 | 71 | 2.3762 | 0.5636 |
| 2.3775 | 8.0 | 82 | 1.9163 | 0.6084 |
| 2.0119 | 8.98 | 92 | 1.7038 | 0.6299 |
| 1.7519 | 9.95 | 102 | 1.5220 | 0.6411 |
| 1.4995 | 10.93 | 112 | 1.3828 | 0.6575 |
| 1.3648 | 12.0 | 123 | 1.2715 | 0.6668 |
| 1.2357 | 12.98 | 133 | 1.2040 | 0.6692 |
| 1.1606 | 13.95 | 143 | 1.1249 | 0.6785 |
| 1.0793 | 14.93 | 153 | 1.0600 | 0.6897 |
| 1.0332 | 16.0 | 164 | 1.0160 | 0.6935 |
| 0.9724 | 16.98 | 174 | 0.9706 | 0.7047 |
| 0.9349 | 17.95 | 184 | 0.9524 | 0.7075 |
| 0.895 | 18.93 | 194 | 0.9210 | 0.7093 |
| 0.8913 | 20.0 | 205 | 0.9007 | 0.7168 |
| 0.8519 | 20.98 | 215 | 0.8672 | 0.7229 |
| 0.8434 | 21.95 | 225 | 0.8432 | 0.7252 |
| 0.8346 | 22.93 | 235 | 0.8307 | 0.7304 |
| 0.8019 | 24.0 | 246 | 0.8154 | 0.7308 |
| 0.8001 | 24.98 | 256 | 0.8121 | 0.7327 |
| 0.7813 | 25.95 | 266 | 0.8036 | 0.7341 |
| 0.7845 | 26.93 | 276 | 0.8025 | 0.7383 |
| 0.7635 | 28.0 | 287 | 0.7934 | 0.7444 |
| 0.7782 | 28.98 | 297 | 0.7910 | 0.7421 |
| 0.7634 | 29.27 | 300 | 0.7961 | 0.7397 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.1
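Since the card's usage section is empty, here is a minimal inference sketch using the 🤗 `pipeline` API; the image path is a placeholder for a local cassava leaf photo.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="louislu9911/resnet-152-finetuned-cassava-leaf-disease",
)
# "leaf.jpg" is a hypothetical local file; a URL or PIL image also works.
print(classifier("leaf.jpg"))  # top predicted disease classes with scores
```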
dyingc/Llama-2-7b-chat-hf-quant
dyingc
2024-03-25T22:48:53Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-03-25T22:25:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
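The "How to Get Started" section above is unfilled; a minimal sketch follows, assuming a CUDA device and the `optimum`/`auto-gptq` packages that 🤗 Transformers needs to load 4-bit GPTQ checkpoints.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dyingc/Llama-2-7b-chat-hf-quant"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ weights are dequantized on the fly during the forward pass.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The prompt is illustrative; Llama-2-chat models expect the [INST] chat format.
inputs = tokenizer("[INST] What is GPTQ quantization? [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```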
andrewmwells/distilbert-base-uncased-finetuned-emotion
andrewmwells
2024-03-25T22:47:32Z
118
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-04T17:52:13Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.927
    - name: F1
      type: f1
      value: 0.9269759151801947
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2217
- Accuracy: 0.927
- F1: 0.9270

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3219 | 0.9085 | 0.9076 |
| No log | 2.0 | 500 | 0.2217 | 0.927 | 0.9270 |

### Framework versions

- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.3
- Tokenizers 0.14.1
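A minimal inference sketch; the printed label name assumes the standard six-class `emotion` dataset mapping (sadness, joy, love, anger, fear, surprise), which the card does not confirm explicitly.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="andrewmwells/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results!"))
# e.g. [{'label': 'joy', 'score': 0.99}]
```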
AlignmentResearch/robust_llm_pythia-tt-1b-mz-ada-v3-ch-139000
AlignmentResearch
2024-03-25T22:46:00Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b-deduped", "base_model:finetune:EleutherAI/pythia-1b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:43:58Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-1b-deduped model-index: - name: robust_llm_pythia-tt-1b-mz-ada-v3-ch-139000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-1b-mz-ada-v3-ch-139000 This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-410m-mz-ada-v3-ch-136000
AlignmentResearch
2024-03-25T22:34:58Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:finetune:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:33:55Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-410m-deduped model-index: - name: robust_llm_pythia-tt-410m-mz-ada-v3-ch-136000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-410m-mz-ada-v3-ch-136000 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-410m-mz-ada-v3-ch-137000
AlignmentResearch
2024-03-25T22:33:41Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:finetune:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:32:47Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-410m-deduped model-index: - name: robust_llm_pythia-tt-410m-mz-ada-v3-ch-137000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-410m-mz-ada-v3-ch-137000 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-410m-mz-ada-v3-ch-140000
AlignmentResearch
2024-03-25T22:33:00Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:finetune:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:31:58Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-410m-deduped model-index: - name: robust_llm_pythia-tt-410m-mz-ada-v3-ch-140000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-410m-mz-ada-v3-ch-140000 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-410m-mz-ada-v3-ch-142000
AlignmentResearch
2024-03-25T22:33:00Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:finetune:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:32:00Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-410m-deduped model-index: - name: robust_llm_pythia-tt-410m-mz-ada-v3-ch-142000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-410m-mz-ada-v3-ch-142000 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-410m-mz-ada-v3-ch-134000
AlignmentResearch
2024-03-25T22:31:53Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:finetune:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:31:00Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-410m-deduped model-index: - name: robust_llm_pythia-tt-410m-mz-ada-v3-ch-134000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-410m-mz-ada-v3-ch-134000 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-160m-mz-ada-v3-ch-134000
AlignmentResearch
2024-03-25T22:24:48Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m-deduped", "base_model:finetune:EleutherAI/pythia-160m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:24:24Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-160m-deduped model-index: - name: robust_llm_pythia-tt-160m-mz-ada-v3-ch-134000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-160m-mz-ada-v3-ch-134000 This model is a fine-tuned version of [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
JuanMa360/kitchen-layouts-2.3.0-86M
JuanMa360
2024-03-25T22:24:42Z
316
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-25T22:24:37Z
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: kitchen-layouts-2.3.0-86M
  results:
  - task:
      name: Image Classification
      type: image-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: Not reported
---

# kitchen-layouts-2.3.0-86M

Kitchen layout detection 🤗🖼️

## Example Images

#### g_shaped_kitchen
![g_shaped_kitchen](images/g_shaped_kitchen.JPEG)

#### galley_kitchen
![galley_kitchen](images/galley_kitchen.JPEG)

#### island_kitchen
![island_kitchen](images/island_kitchen.JPEG)

#### l_shaped_kitchen
![l_shaped_kitchen](images/l_shaped_kitchen.JPEG)

#### single_wall_kitchen
![single_wall_kitchen](images/single_wall_kitchen.jpg)

#### u_shaped_kitchen
![u_shaped_kitchen](images/u_shaped_kitchen.JPEG)
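A minimal sketch for classifying a kitchen photo against the six layout classes shown above; the image path is a placeholder.

```python
from transformers import pipeline

# ViT classifier trained with HuggingPics; "kitchen.jpg" is a hypothetical local image.
classifier = pipeline("image-classification", model="JuanMa360/kitchen-layouts-2.3.0-86M")
print(classifier("kitchen.jpg"))  # e.g. [{'label': 'island_kitchen', 'score': ...}, ...]
```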
AlignmentResearch/robust_llm_pythia-tt-160m-mz-ada-v3-ch-142000
AlignmentResearch
2024-03-25T22:23:26Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m-deduped", "base_model:finetune:EleutherAI/pythia-160m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:23:00Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-160m-deduped model-index: - name: robust_llm_pythia-tt-160m-mz-ada-v3-ch-142000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-160m-mz-ada-v3-ch-142000 This model is a fine-tuned version of [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-141000
AlignmentResearch
2024-03-25T22:23:23Z
4
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-2.8b-deduped", "base_model:finetune:EleutherAI/pythia-2.8b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:20:16Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-2.8b-deduped model-index: - name: robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-141000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-141000 This model is a fine-tuned version of [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-160m-mz-ada-v3-ch-137000
AlignmentResearch
2024-03-25T22:21:04Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m-deduped", "base_model:finetune:EleutherAI/pythia-160m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:20:40Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-160m-deduped model-index: - name: robust_llm_pythia-tt-160m-mz-ada-v3-ch-137000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-160m-mz-ada-v3-ch-137000 This model is a fine-tuned version of [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-138000
AlignmentResearch
2024-03-25T22:20:21Z
3
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-2.8b-deduped", "base_model:finetune:EleutherAI/pythia-2.8b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:17:12Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-2.8b-deduped model-index: - name: robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-138000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-138000 This model is a fine-tuned version of [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
7vq0ir/alcy9
7vq0ir
2024-03-25T22:20:15Z
251
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-31T17:35:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mole-code/llama_index-codegen-2B-multi-fft
mole-code
2024-03-25T22:20:03Z
7
0
transformers
[ "transformers", "safetensors", "codegen", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-25T22:15:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
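The "How to Get Started" section above is unfilled; a minimal completion sketch follows, assuming this checkpoint keeps the standard CodeGen causal-LM interface. The prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mole-code/llama_index-codegen-2B-multi-fft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A hypothetical code-completion prompt.
prompt = "def load_documents(path):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```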
MaksimTw/gemma-7b-it-tw-txt2sql
MaksimTw
2024-03-25T22:19:14Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "gemma", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:google/gemma-7b-it", "base_model:adapter:google/gemma-7b-it", "license:other", "region:us" ]
null
2024-03-23T00:05:15Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: google/gemma-7b-it model-index: - name: gemma-7b-it-tw-txt2sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma-7b-it-tw-txt2sql This model is a fine-tuned version of [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
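Because this repository stores PEFT (LoRA) adapters rather than full model weights, a minimal loading sketch follows: the gated base model is fetched separately, and the text-to-SQL prompt is a hypothetical example since the card does not document the expected prompt format.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-7b-it"
adapter_id = "MaksimTw/gemma-7b-it-tw-txt2sql"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapters

prompt = "Translate to SQL: list all customers who placed an order in 2023."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```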
Solshine/LORA-Adapters-Mistral7B-NaturalFarmerV3
Solshine
2024-03-25T22:18:55Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:other", "endpoints_compatible", "region:us" ]
null
2024-03-21T20:23:19Z
---
language:
- en
license: other
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---

# Uploaded model

- **Developed by:** Caleb DeLeeuw; Copyleft Cultivars, a nonprofit
- **License:** Hippocratic 3.0 CL-Eco-Extr [![Hippocratic License HL3-CL-ECO-EXTR](https://img.shields.io/static/v1?label=Hippocratic%20License&message=HL3-CL-ECO-EXTR&labelColor=5e2751&color=bc8c3d)](https://firstdonoharm.dev/version/3/0/cl-eco-extr.html) https://firstdonoharm.dev/version/3/0/cl-eco-extr.html
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

Using real-world user data from a previous farmer-assistant chatbot service, together with additional curated datasets (prioritizing sustainable, regenerative, organic farming practices), Gemma 2B and Mistral 7B LLMs were iteratively fine-tuned and evaluated against each other as well as on basic benchmarks; the Gemma 2B fine-tune emerged victorious. LoRA adapters were saved for each model. In agriculture-focused preliminary testing on the selected dataset, V3 scored better than V1 or V2 of the Mistral series of fine-tunes.

This Mistral model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
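Since this repository holds LoRA adapters rather than merged weights, a minimal sketch for attaching them to the 4-bit base model with PEFT follows (Unsloth's own loader would also work); it assumes `bitsandbytes` and a CUDA device are available.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"
adapter_id = "Solshine/LORA-Adapters-Mistral7B-NaturalFarmerV3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # 4-bit base
model = PeftModel.from_pretrained(base, adapter_id)  # attach the NaturalFarmer adapters
```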
AlignmentResearch/robust_llm_pythia-tt-160m-mz-ada-v3-ch-140000
AlignmentResearch
2024-03-25T22:17:56Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m-deduped", "base_model:finetune:EleutherAI/pythia-160m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:17:31Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-160m-deduped model-index: - name: robust_llm_pythia-tt-160m-mz-ada-v3-ch-140000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-160m-mz-ada-v3-ch-140000 This model is a fine-tuned version of [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-160m-mz-ada-v3-ch-136000
AlignmentResearch
2024-03-25T22:17:03Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m-deduped", "base_model:finetune:EleutherAI/pythia-160m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:16:36Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-160m-deduped model-index: - name: robust_llm_pythia-tt-160m-mz-ada-v3-ch-136000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-160m-mz-ada-v3-ch-136000 This model is a fine-tuned version of [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-70m-mz-ada-v3-ch-139000
AlignmentResearch
2024-03-25T22:14:55Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-70m-deduped", "base_model:finetune:EleutherAI/pythia-70m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:14:42Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-70m-deduped model-index: - name: robust_llm_pythia-tt-70m-mz-ada-v3-ch-139000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-70m-mz-ada-v3-ch-139000 This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-14m-mz-ada-v3-ch-140000
AlignmentResearch
2024-03-25T22:11:34Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "base_model:finetune:EleutherAI/pythia-14m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:11:25Z
--- tags: - generated_from_trainer base_model: EleutherAI/pythia-14m model-index: - name: robust_llm_pythia-tt-14m-mz-ada-v3-ch-140000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-14m-mz-ada-v3-ch-140000 This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
mole-code/llama_index-codegen-2B-multi-lora
mole-code
2024-03-25T22:10:40Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-25T22:10:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DiogoF/Codenames-16000-V1
DiogoF
2024-03-25T22:10:21Z
1
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-25T17:51:59Z
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: CompVis/stable-diffusion-v1-4
inference: true
instance_prompt: the <codenames> style
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# DreamBooth - DiogoF/Codenames-16000-V1

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the <codenames> style using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

See the illustrative sketch after this card.

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
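A minimal sketch for the TODO block above, assuming the standard diffusers text-to-image API. The scene in the prompt is invented for illustration; only the `the <codenames> style` phrase comes from the card's instance prompt:

```python
# Illustrative sketch, not an official snippet from the repository.
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "DiogoF/Codenames-16000-V1", torch_dtype=torch.float16
).to("cuda")

# The weights were trained on the instance prompt "the <codenames> style".
image = pipeline("a castle on a hill in the <codenames> style").images[0]
image.save("codenames_sample.png")
```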
AlignmentResearch/robust_llm_pythia-tt-70m-mz-ada-v3-ch-134000
AlignmentResearch
2024-03-25T22:07:58Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-70m-deduped", "base_model:finetune:EleutherAI/pythia-70m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:07:46Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-70m-deduped model-index: - name: robust_llm_pythia-tt-70m-mz-ada-v3-ch-134000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-70m-mz-ada-v3-ch-134000 This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
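For readers who want to reproduce a run like the one above, here is a hypothetical sketch mapping the listed hyperparameters onto Hugging Face `TrainingArguments`; the mapping and the `output_dir` are my assumptions, only the numeric values come from the card. The same mapping applies to the other trainer-generated cards in this dump.

```python
# Hypothetical reconstruction of the hyperparameters listed above;
# output_dir is invented for illustration, the values are from the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="robust_llm_pythia-tt-70m-mz-ada-v3-ch-134000",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```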
AlignmentResearch/robust_llm_pythia-tt-14m-mz-ada-v3-ch-142000
AlignmentResearch
2024-03-25T22:07:36Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "base_model:finetune:EleutherAI/pythia-14m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:07:32Z
--- tags: - generated_from_trainer base_model: EleutherAI/pythia-14m model-index: - name: robust_llm_pythia-tt-14m-mz-ada-v3-ch-142000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-14m-mz-ada-v3-ch-142000 This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-31m-mz-ada-v3-ch-139000
AlignmentResearch
2024-03-25T22:07:00Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "base_model:finetune:EleutherAI/pythia-31m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:06:52Z
--- tags: - generated_from_trainer base_model: EleutherAI/pythia-31m model-index: - name: robust_llm_pythia-tt-31m-mz-ada-v3-ch-139000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-31m-mz-ada-v3-ch-139000 This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-129000
AlignmentResearch
2024-03-25T22:03:45Z
3
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-2.8b-deduped", "base_model:finetune:EleutherAI/pythia-2.8b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:00:27Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-2.8b-deduped model-index: - name: robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-129000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-129000 This model is a fine-tuned version of [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-132000
AlignmentResearch
2024-03-25T22:03:09Z
3
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-2.8b-deduped", "base_model:finetune:EleutherAI/pythia-2.8b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T21:59:26Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-2.8b-deduped model-index: - name: robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-132000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-132000 This model is a fine-tuned version of [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-14m-mz-ada-v3-ch-137000
AlignmentResearch
2024-03-25T22:02:56Z
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "base_model:finetune:EleutherAI/pythia-14m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T22:02:51Z
--- tags: - generated_from_trainer base_model: EleutherAI/pythia-14m model-index: - name: robust_llm_pythia-tt-14m-mz-ada-v3-ch-137000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-14m-mz-ada-v3-ch-137000 This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-31m-mz-ada-v3-ch-142000
AlignmentResearch
2024-03-25T21:59:48Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "base_model:finetune:EleutherAI/pythia-31m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T21:59:37Z
--- tags: - generated_from_trainer base_model: EleutherAI/pythia-31m model-index: - name: robust_llm_pythia-tt-31m-mz-ada-v3-ch-142000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-31m-mz-ada-v3-ch-142000 This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
blockblockblock/Code-Mistral-7B-bpw6
blockblockblock
2024-03-25T21:59:02Z
2
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "code", "mathematics", "conversational", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "dataset:m-a-p/Code-Feedback", "dataset:microsoft/orca-math-word-problems-200k", "dataset:teknium/openhermes", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-03-25T21:56:55Z
---
license: apache-2.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
- m-a-p/Code-Feedback
- microsoft/orca-math-word-problems-200k
- teknium/openhermes
language:
- en
tags:
- code
- mathematics
---

**Code-Mistral-7B**

This model is trained on a refined version of my dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT). Besides this, it is trained on the following datasets:

[Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)

[orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)

[Openhermes](https://huggingface.co/datasets/teknium/openhermes)

The idea was to check how this model would perform with both code and maths datasets. This model is very good at coding. Maths is still hit and miss, but you can test out this model. This model is trained on massive datasets, so the results are very good. I have used the ChatML prompt format. Kindly note this is the qLoRA version, a rare exception.

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs. For 3 epochs, training took almost 33 hours. The Axolotl codebase was used for training. All of the data was trained on Mistral.

**Example Prompt:**

This model uses the **ChatML** prompt format.

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements. A hypothetical usage sketch follows this card.

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI/model development. Thank you for your love and support.

**Example Output**

**C++**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/jcmEZSRX7s7-B_ZybWwwN.jpeg)

**Error Resolving**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/iy89IxjiZXAY4Id-ieLg7.jpeg)

**Matrices**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/zFfq9lBA63wQzy0tP3_hd.jpeg)

**Machine Learning**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/Nv8dCpNxRtJGkOuulKzmn.jpeg)
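As a companion to the prompt-format note above, here is a minimal, hypothetical Python sketch that applies ChatML through the tokenizer's chat template. The repo id `ajibawa-2023/Code-Mistral-7B` and the presence of a bundled chat template are assumptions on my part, not statements from the card (this entry itself is an exl2 quantization):

```python
# Hypothetical usage sketch; repo id and chat template are assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Code-Mistral-7B"  # assumed source repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Write a C++ function that transposes a matrix."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```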
AlignmentResearch/robust_llm_pythia-tt-70m-mz-ada-v3-ch-136000
AlignmentResearch
2024-03-25T21:59:00Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-70m-deduped", "base_model:finetune:EleutherAI/pythia-70m-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T21:58:46Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-70m-deduped model-index: - name: robust_llm_pythia-tt-70m-mz-ada-v3-ch-136000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-70m-mz-ada-v3-ch-136000 This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
rshrott/renovation
rshrott
2024-03-25T21:58:39Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:renovation", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-23T17:59:17Z
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- renovation
metrics:
- accuracy
model-index:
- name: renovation
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: beans
      type: renovation
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7219562243502052
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# renovation

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6830
- Accuracy: 0.7220

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0475 | 0.07 | 100 | 1.0332 | 0.5824 |
| 0.8651 | 0.14 | 200 | 0.9322 | 0.6204 |
| 1.0022 | 0.21 | 300 | 1.2150 | 0.5147 |
| 1.0636 | 0.27 | 400 | 0.9523 | 0.6252 |
| 0.8311 | 0.34 | 500 | 0.8440 | 0.6556 |
| 0.88 | 0.41 | 600 | 0.8707 | 0.6495 |
| 0.8881 | 0.48 | 700 | 0.8903 | 0.6334 |
| 0.7522 | 0.55 | 800 | 0.8479 | 0.6577 |
| 0.798 | 0.62 | 900 | 0.7739 | 0.6843 |
| 0.7317 | 0.68 | 1000 | 0.7856 | 0.6795 |
| 0.8372 | 0.75 | 1100 | 0.8884 | 0.6354 |
| 0.6629 | 0.82 | 1200 | 0.7573 | 0.6871 |
| 0.7767 | 0.89 | 1300 | 0.7543 | 0.6860 |
| 0.9246 | 0.96 | 1400 | 0.7896 | 0.6635 |
| 0.5026 | 1.03 | 1500 | 0.7872 | 0.6813 |
| 0.7599 | 1.1 | 1600 | 0.7861 | 0.6758 |
| 0.5764 | 1.16 | 1700 | 0.8088 | 0.6802 |
| 0.4329 | 1.23 | 1800 | 0.7281 | 0.7059 |
| 0.6271 | 1.3 | 1900 | 0.7291 | 0.7117 |
| 0.5498 | 1.37 | 2000 | 0.7745 | 0.7059 |
| 0.5247 | 1.44 | 2100 | 0.8002 | 0.6891 |
| 0.4891 | 1.51 | 2200 | 0.7014 | 0.7100 |
| 0.5211 | 1.57 | 2300 | 0.7725 | 0.6864 |
| 0.659 | 1.64 | 2400 | 0.7477 | 0.7086 |
| 0.4878 | 1.71 | 2500 | 0.7129 | 0.7052 |
| 0.4941 | 1.78 | 2600 | 0.6830 | 0.7220 |
| 0.4648 | 1.85 | 2700 | 0.7182 | 0.7028 |
| 0.5501 | 1.92 | 2800 | 0.7191 | 0.7144 |
| 0.5491 | 1.98 | 2900 | 0.7132 | 0.7155 |
| 0.2373 | 2.05 | 3000 | 0.7831 | 0.7096 |
| 0.2756 | 2.12 | 3100 | 0.7965 | 0.7247 |
| 0.2299 | 2.19 | 3200 | 0.8241 | 0.7220 |
| 0.2323 | 2.26 | 3300 | 0.8286 | 0.7110 |
| 0.1979 | 2.33 | 3400 | 0.7993 | 0.7302 |
| 0.2507 | 2.4 | 3500 | 0.8477 | 0.7189 |
| 0.205 | 2.46 | 3600 | 0.8197 | 0.7124 |
| 0.35 | 2.53 | 3700 | 0.8348 | 0.7127 |
| 0.3372 | 2.6 | 3800 | 0.8999 | 0.7199 |
| 0.1968 | 2.67 | 3900 | 0.8263 | 0.7274 |
| 0.1443 | 2.74 | 4000 | 0.8704 | 0.7244 |
| 0.1933 | 2.81 | 4100 | 0.8270 | 0.7244 |
| 0.2044 | 2.87 | 4200 | 0.8323 | 0.7274 |
| 0.2709 | 2.94 | 4300 | 0.8494 | 0.7295 |
| 0.1021 | 3.01 | 4400 | 0.8573 | 0.7336 |
| 0.0393 | 3.08 | 4500 | 0.9333 | 0.7377 |
| 0.0973 | 3.15 | 4600 | 0.9646 | 0.7336 |
| 0.0317 | 3.22 | 4700 | 0.9820 | 0.7336 |
| 0.0458 | 3.29 | 4800 | 1.0716 | 0.7326 |
| 0.164 | 3.35 | 4900 | 1.0889 | 0.7312 |
| 0.0578 | 3.42 | 5000 | 1.1011 | 0.7312 |
| 0.0563 | 3.49 | 5100 | 1.1010 | 0.7356 |
| 0.0318 | 3.56 | 5200 | 1.0923 | 0.7343 |
| 0.0255 | 3.63 | 5300 | 1.1156 | 0.7332 |
| 0.0169 | 3.7 | 5400 | 1.1050 | 0.7415 |
| 0.0629 | 3.76 | 5500 | 1.1132 | 0.7373 |
| 0.0627 | 3.83 | 5600 | 1.1110 | 0.7380 |
| 0.0078 | 3.9 | 5700 | 1.1117 | 0.7350 |
| 0.027 | 3.97 | 5800 | 1.1201 | 0.7343 |

### Framework versions

- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-117000
AlignmentResearch
2024-03-25T21:54:54Z
3
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-2.8b-deduped", "base_model:finetune:EleutherAI/pythia-2.8b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T21:51:41Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-2.8b-deduped model-index: - name: robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-117000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-117000 This model is a fine-tuned version of [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
cerpintaxt/finetuning-emotion-model
cerpintaxt
2024-03-25T21:53:43Z
118
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T19:10:08Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: finetuning-emotion-model
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9245
    - name: F1
      type: f1
      value: 0.9246560028548105
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-emotion-model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Accuracy: 0.9245
- F1: 0.9247

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3272 | 0.9005 | 0.8990 |
| 0.5503 | 2.0 | 500 | 0.2224 | 0.9245 | 0.9247 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
surya-narayanan/merged_model_r_32
surya-narayanan
2024-03-25T21:52:14Z
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-25T21:45:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hiba2/arabart_wiki
hiba2
2024-03-25T21:48:15Z
296
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "generated_from_trainer", "base_model:moussaKam/AraBART", "base_model:finetune:moussaKam/AraBART", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-25T21:47:49Z
---
license: apache-2.0
base_model: moussaKam/AraBART
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: arabart_wiki
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# arabart_wiki

This model is a fine-tuned version of [moussaKam/AraBART](https://huggingface.co/moussaKam/AraBART) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Rouge1: 0.1109
- Rouge2: 0.009
- Rougel: 0.1109
- Rougelsum: 0.1105
- Gen Len: 19.9251

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0048 | 7.35 | 500 | 0.0001 | 0.1109 | 0.009 | 0.1109 | 0.1105 | 19.9251 |

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
edumunozsala/gemma-7b-sft-legal-refugiados
edumunozsala
2024-03-25T21:43:14Z
77
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2024-03-25T21:40:38Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl - sft base_model: unsloth/gemma-7b-bnb-4bit --- # Uploaded model - **Developed by:** edumunozsala - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
HachiML/myBit-Llama2-jp-127M-6
HachiML
2024-03-25T21:42:10Z
5
0
transformers
[ "transformers", "safetensors", "bit_llama", "text-generation", "generated_from_trainer", "custom_code", "autotrain_compatible", "region:us" ]
text-generation
2024-03-25T14:58:35Z
---
tags:
- generated_from_trainer
model-index:
- name: myBit-Llama2-jp-127M-6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# myBit-Llama2-jp-127M-6

This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5300

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0024
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.6845 | 0.05 | 2000 | 3.7571 |
| 3.6263 | 0.1 | 4000 | 3.5463 |
| 3.5645 | 0.15 | 6000 | 3.4975 |
| 3.5418 | 0.2 | 8000 | 3.5903 |
| 3.5333 | 0.25 | 10000 | 3.4952 |
| 3.5572 | 0.29 | 12000 | 3.4898 |
| 3.4671 | 0.34 | 14000 | 3.4466 |
| 3.414 | 0.39 | 16000 | 3.4579 |
| 3.4583 | 0.44 | 18000 | 3.4420 |
| 3.4988 | 0.49 | 20000 | 3.5380 |
| 3.5448 | 0.54 | 22000 | 3.4931 |
| 3.4932 | 0.59 | 24000 | 3.4592 |
| 3.5387 | 0.64 | 26000 | 3.5774 |
| 3.6424 | 0.69 | 28000 | 4.0166 |
| 3.8589 | 0.74 | 30000 | 3.7899 |
| 3.7753 | 0.79 | 32000 | 3.7973 |
| 3.7703 | 0.83 | 34000 | 3.7630 |
| 3.7135 | 0.88 | 36000 | 3.6725 |
| 3.6472 | 0.93 | 38000 | 3.5994 |
| 3.5686 | 0.98 | 40000 | 3.5300 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
bartowski/stable-code-instruct-3b-GGUF
bartowski
2024-03-25T21:39:06Z
7,168
18
transformers
[ "transformers", "gguf", "causal-lm", "code", "text-generation", "en", "license:other", "model-index", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-03-25T21:34:43Z
---
license: other
language:
- en
tags:
- causal-lm
- code
metrics:
- code_eval
library_name: transformers
model-index:
- name: stabilityai/stable-code-instruct-3b
  results:
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Python)
    metrics:
    - name: pass@1
      type: pass@1
      value: 32.4
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (C++)
    metrics:
    - name: pass@1
      type: pass@1
      value: 30.9
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Java)
    metrics:
    - name: pass@1
      type: pass@1
      value: 32.1
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (JavaScript)
    metrics:
    - name: pass@1
      type: pass@1
      value: 32.1
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (PHP)
    metrics:
    - name: pass@1
      type: pass@1
      value: 24.2
      verified: false
  - task:
      type: text-generation
    dataset:
      type: nuprl/MultiPL-E
      name: MultiPL-HumanEval (Rust)
    metrics:
    - name: pass@1
      type: pass@1
      value: 23.0
      verified: false
quantized_by: bartowski
pipeline_tag: text-generation
---

## Llamacpp Quantizations of stable-code-instruct-3b

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization.

Original model: https://huggingface.co/stabilityai/stable-code-instruct-3b

Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [stable-code-instruct-3b-Q8_0.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q8_0.gguf) | Q8_0 | 2.97GB | Extremely high quality, generally unneeded but max available quant. |
| [stable-code-instruct-3b-Q6_K.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q6_K.gguf) | Q6_K | 2.29GB | Very high quality, near perfect, *recommended*. |
| [stable-code-instruct-3b-Q5_K_M.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q5_K_M.gguf) | Q5_K_M | 1.99GB | High quality, very usable. |
| [stable-code-instruct-3b-Q5_K_S.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q5_K_S.gguf) | Q5_K_S | 1.94GB | High quality, very usable. |
| [stable-code-instruct-3b-Q5_0.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q5_0.gguf) | Q5_0 | 1.94GB | High quality, older format, generally not recommended. |
| [stable-code-instruct-3b-Q4_K_M.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q4_K_M.gguf) | Q4_K_M | 1.70GB | Good quality, similar to 4.25 bpw. |
| [stable-code-instruct-3b-Q4_K_S.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q4_K_S.gguf) | Q4_K_S | 1.62GB | Slightly lower quality with small space savings. |
| [stable-code-instruct-3b-IQ4_NL.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-IQ4_NL.gguf) | IQ4_NL | 1.61GB | Good quality, similar to Q4_K_S, new method of quantization. |
| [stable-code-instruct-3b-IQ4_XS.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-IQ4_XS.gguf) | IQ4_XS | 1.53GB | Decent quality, new method with similar performance to Q4. |
| [stable-code-instruct-3b-Q4_0.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q4_0.gguf) | Q4_0 | 1.60GB | Decent quality, older format, generally not recommended. |
| [stable-code-instruct-3b-IQ3_M.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-IQ3_M.gguf) | IQ3_M | 1.31GB | Medium-low quality, new method with decent performance. |
| [stable-code-instruct-3b-IQ3_S.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-IQ3_S.gguf) | IQ3_S | 1.25GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| [stable-code-instruct-3b-Q3_K_L.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q3_K_L.gguf) | Q3_K_L | 1.50GB | Lower quality but usable, good for low RAM availability. |
| [stable-code-instruct-3b-Q3_K_M.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q3_K_M.gguf) | Q3_K_M | 1.39GB | Even lower quality. |
| [stable-code-instruct-3b-Q3_K_S.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q3_K_S.gguf) | Q3_K_S | 1.25GB | Low quality, not recommended. |
| [stable-code-instruct-3b-Q2_K.gguf](https://huggingface.co/bartowski/stable-code-instruct-3b-GGUF/blob/main/stable-code-instruct-3b-Q2_K.gguf) | Q2_K | 1.08GB | Extremely low quality, *not* recommended. |

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
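A minimal sketch for downloading a single file (rather than the whole branch), assuming the `huggingface_hub` Python client; the choice of the Q4_K_M file is illustrative:

```python
# Illustrative sketch: fetch one quantized file from the repo above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/stable-code-instruct-3b-GGUF",
    filename="stable-code-instruct-3b-Q4_K_M.gguf",  # pick any quant from the table
)
print(path)  # local cache path of the downloaded GGUF file
```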
Teapack1/LoRA-TinyLlama-1.1B-Chat-v1.0-Chris-Williamson-chat
Teapack1
2024-03-25T21:33:55Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-03-25T20:54:58Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 model-index: - name: LoRA-TinyLlama-1.1B-Chat-v1.0-Chris-Williamson-chat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA-TinyLlama-1.1B-Chat-v1.0-Chris-Williamson-chat This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.38.1 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
edumunozsala/adapter-gemma-7b-sft-legal-ref
edumunozsala
2024-03-25T21:33:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-25T21:33:32Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-7b-bnb-4bit --- # Uploaded model - **Developed by:** edumunozsala - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
blockblockblock/Code-Mistral-7B-bpw5
blockblockblock
2024-03-25T21:33:22Z
7
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "code", "mathematics", "conversational", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "dataset:m-a-p/Code-Feedback", "dataset:microsoft/orca-math-word-problems-200k", "dataset:teknium/openhermes", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-03-25T21:31:30Z
---
license: apache-2.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
- m-a-p/Code-Feedback
- microsoft/orca-math-word-problems-200k
- teknium/openhermes
language:
- en
tags:
- code
- mathematics
---

**Code-Mistral-7B**

This model is trained on a refined version of my dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT). Besides this, it is trained on the following datasets:

[Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)

[orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)

[Openhermes](https://huggingface.co/datasets/teknium/openhermes)

The idea was to check how this model would perform with both code and maths datasets. This model is very good at coding. Maths is still hit and miss, but you can test out this model. This model is trained on massive datasets, so the results are very good. I have used the ChatML prompt format. Kindly note this is the qLoRA version, a rare exception.

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs. For 3 epochs, training took almost 33 hours. The Axolotl codebase was used for training. All of the data was trained on Mistral.

**Example Prompt:**

This model uses the **ChatML** prompt format.

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements.

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI/model development. Thank you for your love and support.

**Example Output**

**C++**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/jcmEZSRX7s7-B_ZybWwwN.jpeg)

**Error Resolving**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/iy89IxjiZXAY4Id-ieLg7.jpeg)

**Matrices**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/zFfq9lBA63wQzy0tP3_hd.jpeg)

**Machine Learning**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/Nv8dCpNxRtJGkOuulKzmn.jpeg)
FinancialSupport/saiga-7b
FinancialSupport
2024-03-25T21:31:58Z
4,199
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "it", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-28T16:31:58Z
---
language:
- it
license: apache-2.0
model-index:
- name: saiga-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 63.14
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.14
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 61.66
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 54.99
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.01
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 45.11
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
      name: Open LLM Leaderboard
---

The saiga is a strange crossbreed of antelope that lives in the Siberian steppes. The name comes from the fact that it is a relative of fauno/camoscio and a distant cousin of cerbero (other Italian open-source models). It is a project carried forward on weekends with little money and time available.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/648cca46d38113f34bf7cb72/nqYw-P2uPLsNI8FMnLHtN.png)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FinancialSupport__saiga-7b)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |64.51|
|AI2 Reasoning Challenge (25-Shot)|63.14|
|HellaSwag (10-Shot)              |83.14|
|MMLU (5-Shot)                    |61.66|
|TruthfulQA (0-shot)              |54.99|
|Winogrande (5-shot)              |79.01|
|GSM8k (5-shot)                   |45.11|
AlignmentResearch/robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-112000
AlignmentResearch
2024-03-25T21:24:43Z
3
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-2.8b-deduped", "base_model:finetune:EleutherAI/pythia-2.8b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T21:21:35Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-2.8b-deduped model-index: - name: robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-112000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-112000 This model is a fine-tuned version of [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
Vikhrmodels/Vikhr-tiny-0.1
Vikhrmodels
2024-03-25T21:24:21Z
180
2
transformers
[ "transformers", "safetensors", "minicpm", "text-generation", "custom_code", "ru", "en", "zh", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-02-28T08:58:12Z
---
license: apache-2.0
language:
- ru
- en
- zh
library_name: transformers
---

DON'T TOUCH, under dev

|Task |Version| Metric |Value | |Stderr|
|-----|------:|--------|-----:|---|-----:|
|parus| 0|acc |0.4950|± |0.0250|
|rcb | 0|acc |0.3333|± |0.0226|
| | |f1_macro|0.1667| | |
|rwsd | 0|acc |0.4901|± |0.0203|
|mmlu| 0| 0.31|0.225|

Based on https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16

https://wandb.ai/alexwortega/cpm_rus/runs/32w8pv7x?workspace=user-alexwortega

lol
AlignmentResearch/robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-100000
AlignmentResearch
2024-03-25T21:20:55Z
3
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-2.8b-deduped", "base_model:finetune:EleutherAI/pythia-2.8b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T21:17:44Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-2.8b-deduped model-index: - name: robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-100000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-2.8b-mz-ada-v3-ch-100000 This model is a fine-tuned version of [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
0x0daughter1/gemma_gpc
0x0daughter1
2024-03-25T21:17:43Z
141
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-25T21:15:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
blockblockblock/Code-Mistral-7B-bpw4.6
blockblockblock
2024-03-25T21:08:01Z
3
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "code", "mathematics", "conversational", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "dataset:m-a-p/Code-Feedback", "dataset:microsoft/orca-math-word-problems-200k", "dataset:teknium/openhermes", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-03-25T21:06:14Z
---
license: apache-2.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
- m-a-p/Code-Feedback
- microsoft/orca-math-word-problems-200k
- teknium/openhermes
language:
- en
tags:
- code
- mathematics
---

**Code-Mistral-7B**

This model is trained on a refined version of my dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT). Besides this, it is trained on the following datasets:

[Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)

[orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)

[Openhermes](https://huggingface.co/datasets/teknium/openhermes)

The idea was to check how this model performs with both code & maths datasets. The model is very good at coding; maths is still hit & miss, but you can test it out. Since it is trained on massive datasets, the results are very good. I have used the ChatML prompt format. Kindly note this is the qLoRA version, a rare exception.

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs. For 3 epochs, training took almost 33 hours. The Axolotl codebase was used for training. All data was trained on Mistral.

**Example Prompt:**

This model uses the **ChatML** prompt format.

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements. A minimal usage sketch is included at the end of this card.

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI/model development. Thank you for your love & support.

**Example Output**

**C++**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/jcmEZSRX7s7-B_ZybWwwN.jpeg)

**Error Resolving**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/iy89IxjiZXAY4Id-ieLg7.jpeg)

**Matrices**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/zFfq9lBA63wQzy0tP3_hd.jpeg)

**Machine Learning**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/Nv8dCpNxRtJGkOuulKzmn.jpeg)
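Below is a minimal generation sketch for the ChatML template above, using 🤗 Transformers. The repo id, prompt, and sampling settings are illustrative assumptions (this particular repository hosts an exl2 quantisation, so an exl2-capable backend such as ExLlamaV2 may be the more natural loader).

```python
# Minimal ChatML generation sketch. The repo id and sampling settings are
# illustrative assumptions, not part of the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Code-Mistral-7B"  # hypothetical full-precision source repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the ChatML prompt exactly as shown in the template above.
prompt = (
    "<|im_start|>system\nYou are a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a C++ function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```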
kavalry/q-Taxi-v3
kavalry
2024-03-25T21:07:18Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-25T21:06:18Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.44 +/- 2.70
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="kavalry/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
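The snippet above assumes a `load_from_hub` helper that is not defined in the card. A plausible sketch, following the pickle convention commonly used for these Q-learning repos (the exact keys stored in the pickle, such as `env_id`, are assumptions):

```python
# Hypothetical `load_from_hub` helper, since the usage snippet does not define it.
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dict from the Hugging Face Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="kavalry/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```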
JFernandoGRE/mistral_7b_augmenteddemocracy_dups_all2_25
JFernandoGRE
2024-03-25T21:06:55Z
75
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-25T21:03:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Thalirajesh/Aerial-Drone-Image-Segmentation
Thalirajesh
2024-03-25T21:05:15Z
334
9
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "generated_from_trainer", "image-segmentation", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2024-03-25T16:33:39Z
---
license: other
tags:
- generated_from_trainer
base_model: nvidia/mit-b0
model-index:
- name: Aerial-Drone-Image-Segmentation
  results: []
pipeline_tag: image-segmentation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Aerial-Drone-Image-Segmentation

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0). It achieves the following results on the evaluation set:
- Loss: 0.8852
- Mean Iou: 0.2994
- Mean Accuracy: 0.3923
- Overall Accuracy: 0.7774

## Model description

More information needed

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Evaluation Results

```python
{'mean_iou': 0.27989828118195953,
 'mean_accuracy': 0.3712316062110093,
 'overall_accuracy': 0.7671712239583334,
 'per_category_iou': array([nan, 0.8560476 , 0.32234631, 0.76880948, 0.57517691, 0.43877125,
        0.00114888, 0.14091442, 0.51807365, 0.76964765, 0.27391949, 0.,
        0., 0., 0., 0.05778175, 0., 0.45566807, 0., 0.25864545,
        0.48767764, 0., 0.23313364, nan]),
 'per_category_accuracy': array([nan, 0.96170675, 0.43993514, 0.86977593, 0.8149788 , 0.49739671,
        0.00114987, 0.14445379, 0.80978302, 0.88661108, 0.46787116, 0.,
        0., 0., 0., 0.05947339, 0., 0.55639324, 0., 0.38358184,
        0.761303 , 0., 0.51268161, nan])}
```

### Training results

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64fec5de57ccb8f1bdfbec54/nRUHIJAj52l3wxMTJARka.png)

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|
| 2.7923 | 1.25 | 20 | 2.8338 | 0.0954 | 0.1626 | 0.5529 |
| 2.219 | 2.5 | 40 | 2.1391 | 0.1036 | 0.1666 | 0.5929 |
| 1.9451 | 3.75 | 60 | 1.7919 | 0.1154 | 0.1782 | 0.6129 |
| 1.7558 | 5.0 | 80 | 1.6767 | 0.1300 | 0.1961 | 0.6396 |
| 1.6381 | 6.25 | 100 | 1.5817 | 0.1383 | 0.2055 | 0.6550 |
| 1.5338 | 7.5 | 120 | 1.4816 | 0.1464 | 0.2140 | 0.6729 |
| 1.4478 | 8.75 | 140 | 1.4231 | 0.1529 | 0.2219 | 0.6823 |
| 1.361 | 10.0 | 160 | 1.3300 | 0.1637 | 0.2315 | 0.6975 |
| 1.306 | 11.25 | 180 | 1.3034 | 0.1737 | 0.2419 | 0.7060 |
| 1.2611 | 12.5 | 200 | 1.2692 | 0.1755 | 0.2450 | 0.7093 |
| 1.2317 | 13.75 | 220 | 1.2190 | 0.1821 | 0.2501 | 0.7145 |
| 1.1868 | 15.0 | 240 | 1.2063 | 0.1862 | 0.2539 | 0.7188 |
| 1.1628 | 16.25 | 260 | 1.1832 | 0.1909 | 0.2612 | 0.7234 |
| 1.1149 | 17.5 | 280 | 1.1368 | 0.2048 | 0.2739 | 0.7317 |
| 1.1009 | 18.75 | 300 | 1.1117 | 0.2232 | 0.2938 | 0.7387 |
| 1.0532 | 20.0 | 320 | 1.0923 | 0.2315 | 0.2997 | 0.7414 |
| 1.0464 | 21.25 | 340 | 1.0821 | 0.2408 | 0.3147 | 0.7480 |
| 1.0278 | 22.5 | 360 | 1.0541 | 0.2517 | 0.3277 | 0.7530 |
| 0.9945 | 23.75 | 380 | 1.0352 | 0.2612 | 0.3398 | 0.7573 |
| 0.9729 | 25.0 | 400 | 1.0207 | 0.2671 | 0.3511 | 0.7609 |
| 0.9527 | 26.25 | 420 | 1.0067 | 0.2684 | 0.3547 | 0.7609 |
| 0.9494 | 27.5 | 440 | 0.9870 | 0.2713 | 0.3548 | 0.7627 |
| 0.9287 | 28.75 | 460 | 0.9729 | 0.2745 | 0.3619 | 0.7640 |
| 0.9089 | 30.0 | 480 | 0.9561 | 0.2791 | 0.3640 | 0.7680 |
| 0.9064 | 31.25 | 500 | 0.9500 | 0.2799 | 0.3712 | 0.7672 |
| 0.8681 | 32.5 | 520 | 0.9397 | 0.2845 | 0.3749 | 0.7696 |
| 0.8677 | 33.75 | 540 | 0.9340 | 0.2835 | 0.3737 | 0.7692 |
| 0.8663 | 35.0 | 560 | 0.9243 | 0.2862 | 0.3755 | 0.7716 |
| 0.8629 | 36.25 | 580 | 0.9173 | 0.2869 | 0.3766 | 0.7719 |
| 0.8542 | 37.5 | 600 | 0.9112 | 0.2908 | 0.3810 | 0.7740 |
| 0.8391 | 38.75 | 620 | 0.9050 | 0.2904 | 0.3812 | 0.7734 |
| 0.8392 | 40.0 | 640 | 0.9027 | 0.2917 | 0.3818 | 0.7734 |
| 0.8306 | 41.25 | 660 | 0.8949 | 0.2941 | 0.3841 | 0.7755 |
| 0.8213 | 42.5 | 680 | 0.8936 | 0.2958 | 0.3875 | 0.7760 |
| 0.8406 | 43.75 | 700 | 0.8910 | 0.2964 | 0.3879 | 0.7763 |
| 0.8254 | 45.0 | 720 | 0.8889 | 0.2981 | 0.3897 | 0.7764 |
| 0.8202 | 46.25 | 740 | 0.8880 | 0.2985 | 0.3917 | 0.7767 |
| 0.8013 | 47.5 | 760 | 0.8891 | 0.2989 | 0.3923 | 0.7767 |
| 0.8188 | 48.75 | 780 | 0.8861 | 0.2994 | 0.3926 | 0.7772 |
| 0.8089 | 50.0 | 800 | 0.8852 | 0.2994 | 0.3923 | 0.7774 |

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
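A minimal inference sketch for this checkpoint (not part of the original card; the image URL is an arbitrary placeholder):

```python
# Minimal semantic-segmentation inference sketch for this checkpoint.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo_id = "Thalirajesh/Aerial-Drone-Image-Segmentation"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

# Placeholder image URL; substitute your own aerial/drone image.
image = Image.open(requests.get("https://example.com/drone.jpg", stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)

# Upsample the logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # (height, width) tensor of class ids
```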
strangervb/Llama-2-70B-Chat-GPTQ-2
strangervb
2024-03-25T21:03:11Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-2", "en", "arxiv:2307.09288", "base_model:meta-llama/Llama-2-70b-chat-hf", "base_model:quantized:meta-llama/Llama-2-70b-chat-hf", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-03-22T04:52:44Z
--- base_model: meta-llama/Llama-2-70b-chat-hf inference: false language: - en license: llama2 model_creator: Meta Llama 2 model_name: Llama 2 70B Chat model_type: llama pipeline_tag: text-generation prompt_template: '[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don''t know the answer to a question, please don''t share false information. <</SYS>> {prompt}[/INST] ' quantized_by: TheBloke tags: - facebook - meta - pytorch - llama - llama-2 --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama 2 70B Chat - GPTQ - Model creator: [Meta Llama 2](https://huggingface.co/meta-llama) - Original model: [Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) <!-- description start --> ## Description This repo contains GPTQ model files for [Meta Llama 2's Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-70B-chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF) * [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Llama-2-Chat ``` [INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. 
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {prompt}[/INST] ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/main) | 4 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. 
| | [gptq-3bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-64g-actorder_True) | 3 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 29.30 GB | No | 3-bit, with group size 64g and act-order. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | | [gptq-3bit-128g-actorder_False](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-128g-actorder_False) | 3 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. | | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.78 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 37.99 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-70B-chat-GPTQ:main` - With Git, you can clone a branch with: ``` git clone --single-branch --branch main https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ ``` - In Python Transformers code, the branch is the `revision` parameter; see below. <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-70B-chat-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Llama-2-70B-chat-GPTQ:main` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-70B-chat-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. 
Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers>=4.32.0 optimum>=1.12.0 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ pip3 install . ``` ### For CodeLlama models only: you must use Transformers 4.33.0 or later. If 4.33.0 is not yet released when you read this, you will need to install Transformers from source: ```shell pip3 uninstall -y transformers pip3 install git+https://github.com/huggingface/transformers.git ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Llama-2-70B-chat-GPTQ" # To use a different branch, change revision # For example: revision="main" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {prompt}[/INST] ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. 
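As an alternative to the `git clone` approach shown earlier, a specific quant branch can also be fetched with `huggingface_hub`; a small sketch (the `local_dir` value is an illustrative choice):

```python
# Fetch one quant branch of the repo with huggingface_hub instead of git.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/Llama-2-70B-chat-GPTQ",
    revision="gptq-4bit-128g-actorder_True",  # any branch from the table above
    local_dir="Llama-2-70B-chat-GPTQ-4bit-128g",
)
```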
<!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Meta Llama 2's Llama 2 70B Chat # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. 
In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models -- 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. 
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. 
For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
nthakur/mistral-7b-v0.2-sft-mix-23rd-mar-v0
nthakur
2024-03-25T20:57:41Z
2
0
peft
[ "peft", "safetensors", "mistral", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:nthakur/deita-10k-v0-instruct", "dataset:nthakur/Bactrian-X-23-lang-instruct", "dataset:nthakur/GSM8KInstruct-Parallel-instruct", "dataset:nthakur/ultrachat-200k-instruct", "base_model:unsloth/mistral-7b-v0.2", "base_model:adapter:unsloth/mistral-7b-v0.2", "license:apache-2.0", "region:us" ]
null
2024-03-24T04:18:27Z
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- nthakur/deita-10k-v0-instruct
- nthakur/Bactrian-X-23-lang-instruct
- nthakur/GSM8KInstruct-Parallel-instruct
- nthakur/ultrachat-200k-instruct
base_model: unsloth/mistral-7b-v0.2
model-index:
- name: mistral-7b-v0.2-sft-mix-23rd-mar-v0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mistral-7b-v0.2-sft-mix-23rd-mar-v0

This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the nthakur/deita-10k-v0-instruct, the nthakur/Bactrian-X-23-lang-instruct, the nthakur/GSM8KInstruct-Parallel-instruct and the nthakur/ultrachat-200k-instruct datasets. It achieves the following results on the evaluation set:
- Loss: 0.9631

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 48
- total_eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.918 | 1.0 | 5170 | 0.9631 |

### Framework versions

- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
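Since this repository contains a PEFT (LoRA) adapter for unsloth/mistral-7b-v0.2 rather than full model weights, a minimal loading sketch (dtype, device map, and prompt are illustrative assumptions):

```python
# Minimal sketch for loading this PEFT adapter on top of its base model.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "nthakur/mistral-7b-v0.2-sft-mix-23rd-mar-v0"
model = AutoPeftModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer(
    "Explain gradient accumulation in one paragraph.", return_tensors="pt"
).to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```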
AlignmentResearch/robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-129000
AlignmentResearch
2024-03-25T20:56:40Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1.4b-deduped", "base_model:finetune:EleutherAI/pythia-1.4b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T20:53:51Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-1.4b-deduped model-index: - name: robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-129000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-129000 This model is a fine-tuned version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
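A minimal classification sketch for this checkpoint (not part of the original card; the label names returned come from the checkpoint's config and are not documented here):

```python
# Minimal text-classification inference sketch.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-129000",
)
print(classifier("An example input string."))
```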
blockblockblock/Code-Mistral-7B-bpw4.4
blockblockblock
2024-03-25T20:55:26Z
3
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "code", "mathematics", "conversational", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "dataset:m-a-p/Code-Feedback", "dataset:microsoft/orca-math-word-problems-200k", "dataset:teknium/openhermes", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-03-25T20:53:47Z
---
license: apache-2.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
- m-a-p/Code-Feedback
- microsoft/orca-math-word-problems-200k
- teknium/openhermes
language:
- en
tags:
- code
- mathematics
---

**Code-Mistral-7B**

This model is trained on a refined version of my dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT). Besides this, it is trained on the following datasets:

[Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)

[orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)

[Openhermes](https://huggingface.co/datasets/teknium/openhermes)

The idea was to check how this model performs with both code & maths datasets. The model is very good at coding; maths is still hit & miss, but you can test it out. Since it is trained on massive datasets, the results are very good. I have used the ChatML prompt format. Kindly note this is the qLoRA version, a rare exception.

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs. For 3 epochs, training took almost 33 hours. The Axolotl codebase was used for training. All data was trained on Mistral.

**Example Prompt:**

This model uses the **ChatML** prompt format.

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements.

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI/model development. Thank you for your love & support.

**Example Output**

**C++**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/jcmEZSRX7s7-B_ZybWwwN.jpeg)

**Error Resolving**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/iy89IxjiZXAY4Id-ieLg7.jpeg)

**Matrices**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/zFfq9lBA63wQzy0tP3_hd.jpeg)

**Machine Learning**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/Nv8dCpNxRtJGkOuulKzmn.jpeg)
AlignmentResearch/robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-141000
AlignmentResearch
2024-03-25T20:53:29Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1.4b-deduped", "base_model:finetune:EleutherAI/pythia-1.4b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T20:50:30Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-1.4b-deduped model-index: - name: robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-141000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-141000 This model is a fine-tuned version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp
Smuggling1710
2024-03-25T20:52:27Z
6
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp", "ChaoticNeutrals/Kool-Aid_7B", "base_model:ChaoticNeutrals/Kool-Aid_7B", "base_model:merge:ChaoticNeutrals/Kool-Aid_7B", "base_model:Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp", "base_model:merge:Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-25T20:47:38Z
--- tags: - merge - mergekit - lazymergekit - Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp - ChaoticNeutrals/Kool-Aid_7B base_model: - Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp - ChaoticNeutrals/Kool-Aid_7B --- # KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp](https://huggingface.co/Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp) * [ChaoticNeutrals/Kool-Aid_7B](https://huggingface.co/ChaoticNeutrals/Kool-Aid_7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp layer_range: [0, 32] - model: ChaoticNeutrals/Kool-Aid_7B layer_range: [0, 32] merge_method: slerp base_model: Smuggling1710/ErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Smuggling1710/KAErisepBeagleNuBuRPInfinWestLakev2-ENDLESSIreneRP-Neural-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
bikram22pi7/gpt2-thiruvalluvar-model
bikram22pi7
2024-03-25T20:48:27Z
144
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "dataset:bikram22pi7/Thiruvalluvar_Thirukkural", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-25T20:35:45Z
--- library_name: transformers datasets: - bikram22pi7/Thiruvalluvar_Thirukkural --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
richiebailey/subpar0_sdxl
richiebailey
2024-03-25T20:45:40Z
0
0
diffusers
[ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-03-25T19:13:30Z
---
library_name: diffusers
---
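The card is otherwise empty, but the tags identify this repository as a `StableDiffusionXLPipeline` checkpoint. A minimal text-to-image sketch under that assumption (the prompt, dtype, and device are illustrative, not documented by the author):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Sketch only: assumes the repo holds SDXL pipeline weights, as the tags suggest.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "richiebailey/subpar0_sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```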
Konstantinos/el_llama_smol
Konstantinos
2024-03-25T20:42:36Z
42
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "el", "license:odc-by", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-12T10:00:12Z
---
license: odc-by
language:
- el
widget:
- text: "Η Ιαπωνία έχει μια ιστορία που ξεκινά πριν από χιλιάδες χρόνια. Οι επιστήμονες πιστεύουν πως οι Ιάπωνες ως ενιαίο σύνολο προέρχονται από πολλές ομάδες, οι οποίες μετανάστευσαν στα νησιά από άλλα σημεία της Ασίας, στα οποία περιλαμβάνονται "
tags:
- text-generation-inference
---

# el-llama-smol

## Model:

`el-llama-smol` aims to be the first in a series of LLMs trained mostly on Greek corpora. The model is a small (1B-parameter) version of LLaMA, with the following configuration.

```json
{
  "architectures": ["LLaMAForCausalLM"],
  "bos_token_id": 0,
  "eos_token_id": 1,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "intermediate_size": 5461,
  "initializer_range": 0.02,
  "max_sequence_length": 1024,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 24,
  "pad_token_id": -1,
  "rms_norm_eps": 1e-06,
  "transformers_version": "4.28.1",
  "use_cache": true,
  "vocab_size": 22000
}
```

## Training details:

The current snapshot has been trained for 40 hours on an RTX A6000 GPU (48 GB), using the `galore_adamw8bit_per_layer` optimizer by Zhao et al. [1] and a context size of 1024 tokens.

## Dataset:

The model is trained on the Greek subset of the [allenai/c4](https://huggingface.co/datasets/allenai/c4) dataset. Text tokenization is performed with a (heavily unoptimized) tokenizer with a vocabulary of 22,000 tokens, trained with [SentencePiece](https://github.com/google/sentencepiece).

## Examples

#### Use a 🤗 pipeline

```python
from transformers import pipeline, set_seed

pipe = pipeline("text-generation", model="Konstantinos/el_llama_smol")
set_seed(1)

prompt = """Η Ιαπωνία έχει μια ιστορία που ξεκινά πριν από χιλιάδες χρόνια. Οι επιστήμονες πιστεύουν πως οι Ιάπωνες ως ενιαίο σύνολο προέρχονται από πολλές ομάδες, οι οποίες μετανάστευσαν στα νησιά από άλλα σημεία της Ασίας, στα οποία περιλαμβάνονται """
ret = pipe(prompt, do_sample=True, top_k=20, temperature=0.85, max_new_tokens=110)
```

#### Load model directly

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Konstantinos/el_llama_smol")
model = AutoModelForCausalLM.from_pretrained("Konstantinos/el_llama_smol")
```

## References

[1] Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, and Yuandong Tian. (2024). GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection.

## Citation

TBD
cvzion/lora-MISTRAL-dqg-2024-03-25
cvzion
2024-03-25T20:37:11Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:finetune:NousResearch/Hermes-2-Pro-Mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-25T20:37:01Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
---

# Uploaded model

- **Developed by:** cvzion
- **License:** apache-2.0
- **Finetuned from model:** NousResearch/Hermes-2-Pro-Mistral-7B

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
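Since this repository appears to contain LoRA adapter weights for the base model above, inference would look roughly like the following sketch (assuming the repo holds PEFT-format adapters; the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repository.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Pro-Mistral-7B", device_map="auto"
)
model = PeftModel.from_pretrained(base, "cvzion/lora-MISTRAL-dqg-2024-03-25")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Mistral-7B")

inputs = tokenizer("Generate a question about: The sky is blue.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```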
AlignmentResearch/robust_llm_pythia-spam-160m-mz-ada-v3-nd
AlignmentResearch
2024-03-25T20:36:09Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-160m", "base_model:finetune:EleutherAI/pythia-160m", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T20:35:41Z
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-160m
model-index:
- name: robust_llm_pythia-spam-160m-mz-ada-v3-nd
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# robust_llm_pythia-spam-160m-mz-ada-v3-nd

This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
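Going by the repository name and tags, this checkpoint is a sequence classifier (presumably spam detection). A minimal inference sketch; note that the label names and their meaning come from the checkpoint's config, which this card does not document:

```python
from transformers import pipeline

# Sketch only: LABEL_0/LABEL_1 semantics depend on this checkpoint's config.
clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-spam-160m-mz-ada-v3-nd",
)
print(clf("Congratulations! You have won a free cruise. Reply now to claim."))
```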
hlia981/AAS-dependencies
hlia981
2024-03-25T20:32:19Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-03-25T20:29:17Z
---
license: apache-2.0
---

The pre-trained models for AAS include: UniFormerV2, YOLOv8x, and LSTM-2.
blockblockblock/Code-Mistral-7B-bpw4
blockblockblock
2024-03-25T20:30:44Z
5
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "code", "mathematics", "conversational", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "dataset:m-a-p/Code-Feedback", "dataset:microsoft/orca-math-word-problems-200k", "dataset:teknium/openhermes", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-03-25T20:29:08Z
---
license: apache-2.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
- m-a-p/Code-Feedback
- microsoft/orca-math-word-problems-200k
- teknium/openhermes
language:
- en
tags:
- code
- mathematics
---

**Code-Mistral-7B**

This model is trained on a refined version of my dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT). Besides this, it is trained on the following datasets:

[Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)

[orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)

[Openhermes](https://huggingface.co/datasets/teknium/openhermes)

The idea was to check how this model would perform with both code & maths datasets. This model is very good at coding. Maths is still hit & miss, but you can test out this model. This model is trained on massive datasets, so the results are very good. I have used the ChatML prompt format. Kindly note this is a QLoRA version, a rare exception.

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs. For 3 epochs, training took almost 33 hours. The Axolotl codebase was used for training. All data was trained on Mistral.

**Example Prompt:**

This model uses the **ChatML** prompt format.

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements. (A runnable helper that assembles this format appears at the end of this card.)

I want to say special thanks to the open-source community for helping & guiding me to better understand AI/model development. Thank you for your love & support.

**Example Output**

**C++**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/jcmEZSRX7s7-B_ZybWwwN.jpeg)

**Error Resolving**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/iy89IxjiZXAY4Id-ieLg7.jpeg)

**Matrices**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/zFfq9lBA63wQzy0tP3_hd.jpeg)

**Machine Learning**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/Nv8dCpNxRtJGkOuulKzmn.jpeg)
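To make the prompt format above concrete, here is a small, self-contained helper that assembles a single-turn ChatML prompt by hand (the system and user messages are illustrative). The resulting string can be fed to whatever backend loads this EXL2 quantization:

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the ChatML format documented above."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt(
    "You are a helpful AI assistant.",
    "Write a C++ function that reverses a string in place.",
))
```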
RupeshKataria/mistral_7b_guanaco
RupeshKataria
2024-03-25T20:28:18Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-25T20:27:59Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
AlignmentResearch/robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-138000
AlignmentResearch
2024-03-25T20:26:25Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1.4b-deduped", "base_model:finetune:EleutherAI/pythia-1.4b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T20:23:03Z
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-1.4b-deduped
model-index:
- name: robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-138000
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-138000

This model is a fine-tuned version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
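For reference, the hyperparameters above map onto a 🤗 `TrainingArguments` configuration roughly as follows. This is a sketch, not the authors' training script; in particular, it assumes the card's batch sizes are per-device, which may not hold if gradient accumulation or multiple GPUs were used:

```python
from transformers import TrainingArguments

# The Adam betas/epsilon listed in the card are the TrainingArguments defaults
# (adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8).
args = TrainingArguments(
    output_dir="robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-138000",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```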
hallisky/cds_style_classifier
hallisky
2024-03-25T20:26:11Z
105
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T20:05:17Z
---
license: apache-2.0
---

# Citing this work

If you use/reference this work, please cite us with:

```
@inproceedings{hallinan-etal-2023-steer,
    title = "{STEER}: Unified Style Transfer with Expert Reinforcement",
    author = "Hallinan, Skyler and Brahman, Faeze and Lu, Ximing and Jung, Jaehun and Welleck, Sean and Choi, Yejin",
    editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-emnlp.506",
    doi = "10.18653/v1/2023.findings-emnlp.506",
    pages = "7546--7562",
    abstract = "While text style transfer has many applications across natural language processing, the core premise of transferring from a single source style is unrealistic in a real-world setting. In this work, we focus on arbitrary style transfer: rewriting a text from an arbitrary, unknown style to a target style. We propose STEER: Unified Style Transfer with Expert Reinforcement, a unified frame-work developed to overcome the challenge of limited parallel data for style transfer. STEER involves automatically generating a corpus of style-transfer pairs using a product of experts during decoding. The generated offline data is then used to pre-train an initial policy before switching to online, off-policy reinforcement learning for further improvements via fine-grained reward signals. STEER is unified and can transfer to multiple target styles from an arbitrary, unknown source style, making it particularly flexible and efficient. Experimental results on a challenging dataset with text from a diverse set of styles demonstrate state-of-the-art results compared to competitive baselines. Remarkably, STEER outperforms the 175B parameter instruction-tuned GPT-3 on overall style transfer quality, despite being 226 times smaller in size. We also show STEER is robust, maintaining its style transfer capabilities on out-of-domain data, and surpassing nearly all baselines across various styles. The success of our method highlights the potential of RL algorithms when augmented with controllable decoding to overcome the challenge of limited data supervision.",
}
```
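The repository itself is a RoBERTa-based style classifier (per the tags). A minimal inference sketch, assuming standard sequence-classification usage; the set of style labels comes from the checkpoint's config, which this card does not document:

```python
from transformers import pipeline

# Sketch only: the available style labels depend on this checkpoint's config.
style_clf = pipeline("text-classification", model="hallisky/cds_style_classifier")
print(style_clf("Hark! What light through yonder window breaks?", top_k=None))
```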
AlignmentResearch/robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-109000
AlignmentResearch
2024-03-25T20:22:03Z
106
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1.4b-deduped", "base_model:finetune:EleutherAI/pythia-1.4b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T20:18:53Z
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-1.4b-deduped
model-index:
- name: robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-109000
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-109000

This model is a fine-tuned version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
AlignmentResearch/robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-120000
AlignmentResearch
2024-03-25T20:20:40Z
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1.4b-deduped", "base_model:finetune:EleutherAI/pythia-1.4b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-25T20:17:56Z
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-1.4b-deduped
model-index:
- name: robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-120000
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# robust_llm_pythia-tt-1.4b-mz-ada-v3-ch-120000

This model is a fine-tuned version of [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
blockblockblock/Code-Mistral-7B-bpw3.7
blockblockblock
2024-03-25T20:18:32Z
3
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "code", "mathematics", "conversational", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "dataset:m-a-p/Code-Feedback", "dataset:microsoft/orca-math-word-problems-200k", "dataset:teknium/openhermes", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-03-25T20:16:56Z
---
license: apache-2.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
- m-a-p/Code-Feedback
- microsoft/orca-math-word-problems-200k
- teknium/openhermes
language:
- en
tags:
- code
- mathematics
---

**Code-Mistral-7B**

This model is trained on a refined version of my dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT). Besides this, it is trained on the following datasets:

[Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)

[orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)

[Openhermes](https://huggingface.co/datasets/teknium/openhermes)

The idea was to check how this model would perform with both code & maths datasets. This model is very good at coding. Maths is still hit & miss, but you can test out this model. This model is trained on massive datasets, so the results are very good. I have used the ChatML prompt format. Kindly note this is a QLoRA version, a rare exception.

**Training:**

The entire dataset was trained on 4 x A100 80GB GPUs. For 3 epochs, training took almost 33 hours. The Axolotl codebase was used for training. All data was trained on Mistral.

**Example Prompt:**

This model uses the **ChatML** prompt format.

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements.

I want to say special thanks to the open-source community for helping & guiding me to better understand AI/model development. Thank you for your love & support.

**Example Output**

**C++**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/jcmEZSRX7s7-B_ZybWwwN.jpeg)

**Error Resolving**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/iy89IxjiZXAY4Id-ieLg7.jpeg)

**Matrices**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/zFfq9lBA63wQzy0tP3_hd.jpeg)

**Machine Learning**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/Nv8dCpNxRtJGkOuulKzmn.jpeg)