modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
yanex0/realistic-stock-photo
yanex0
2024-02-24T20:12:44Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-24T19:39:26Z
--- license: creativeml-openrail-m ---
gaurav16/mycopilot
gaurav16
2024-02-24T19:58:56Z
7
1
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-01-27T02:56:35Z
# Model Card for Personal Copilot ## Model Details ### Model Description Gaurav Sinha's personal copilot, built using CodeLlama, is a versatile AI assistant for software development. Operating within VSCode, it offers code suggestions, generates test cases, executes shell commands, and provides editing capabilities, enhancing the development workflow. - **Developed by:** Gaurav Sinha - **Model type:** Generative AI - **Language(s):** Python, JavaScript - **License:** [Insert License] ### Model Sources - **Repository:** https://huggingface.co/gaurav16/mycopilot/tree/main ## Uses ### Direct Use The copilot can be directly used in VSCode to assist with coding tasks, including generating test cases, offering code suggestions, executing shell commands, and providing editing capabilities. ### Downstream Use This model can be fine-tuned for specific tasks or integrated into larger AI systems for more complex applications. ### Out-of-Scope Use The model is not intended for malicious use, and its capabilities are limited to providing assistance with software development tasks. ## Bias, Risks, and Limitations [Insert information about potential bias, risks, and limitations of the model.]
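The card above describes a GGUF-packaged copilot but gives no loading code. A minimal sketch with `llama-cpp-python`, assuming a hypothetical quantized filename (check the repo's file list for the actual name):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF file from the repo; the filename here is a guess --
# substitute whatever actually appears under "Files and versions".
gguf_path = hf_hub_download(
    repo_id="gaurav16/mycopilot",
    filename="mycopilot.Q4_K_M.gguf",  # hypothetical filename
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```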
josephleekl/ppo-LunarLander-v2
josephleekl
2024-02-24T19:57:04Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-24T19:56:46Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 265.21 +/- 20.82 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
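The usage block in this card is left as a TODO. A working version, under the assumption that the checkpoint is stored as `ppo-LunarLander-v2.zip` (the usual deep-RL-course naming), would look like:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed from the
# deep-RL-course convention and may differ in this repo.
checkpoint = load_from_hub(
    repo_id="josephleekl/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll the trained agent out in the environment it was trained on.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```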
naonao0715/backward_Mxy_0
naonao0715
2024-02-24T19:55:19Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-24T17:53:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
djomo/MISTRALllux2000-7b-v8
djomo
2024-02-24T19:46:50Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T19:39:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
archiMAD/ppo-Pyramids
archiMAD
2024-02-24T19:43:44Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-02-24T19:43:40Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: archiMAD/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
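For completeness, a hedged sketch of pulling this checkpoint locally before resuming training or inspecting it; `mlagents-load-from-hf` ships with the Hub-integrated ml-agents releases used in the deep RL course:

```bash
# Download the trained agent from the Hub into ./downloads
mlagents-load-from-hf --repo-id="archiMAD/ppo-Pyramids" --local-dir="./downloads"
```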
badokorach/distilbert-base-cased-distilled-agric-trans-2402241
badokorach
2024-02-24T19:37:12Z
61
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:badokorach/distilbert-base-cased-distilled-agric-trans-240224", "base_model:finetune:badokorach/distilbert-base-cased-distilled-agric-trans-240224", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-02-24T19:22:33Z
--- license: apache-2.0 base_model: badokorach/distilbert-base-cased-distilled-agric-trans-240224 tags: - generated_from_keras_callback model-index: - name: badokorach/distilbert-base-cased-distilled-agric-trans-2402241 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # badokorach/distilbert-base-cased-distilled-agric-trans-2402241 This model is a fine-tuned version of [badokorach/distilbert-base-cased-distilled-agric-trans-240224](https://huggingface.co/badokorach/distilbert-base-cased-distilled-agric-trans-240224) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2119 - Validation Loss: 0.0 - Epoch: 14 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 690, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.8415 | 0.0 | 0 | | 0.7094 | 0.0 | 1 | | 0.6198 | 0.0 | 2 | | 0.5324 | 0.0 | 3 | | 0.4700 | 0.0 | 4 | | 0.4164 | 0.0 | 5 | | 0.3747 | 0.0 | 6 | | 0.3353 | 0.0 | 7 | | 0.3137 | 0.0 | 8 | | 0.2898 | 0.0 | 9 | | 0.2684 | 0.0 | 10 | | 0.2474 | 0.0 | 11 | | 0.2469 | 0.0 | 12 | | 0.2305 | 0.0 | 13 | | 0.2119 | 0.0 | 14 | ### Framework versions - Transformers 4.37.2 - TensorFlow 2.15.0 - Datasets 2.17.1 - Tokenizers 0.15.2
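The card lists training details but no inference snippet. A minimal sketch, assuming the repo's TensorFlow weights (the `tf` tag) load through the standard pipeline API:

```python
from transformers import pipeline

# The repo ships TensorFlow weights, so request that framework explicitly.
qa = pipeline(
    "question-answering",
    model="badokorach/distilbert-base-cased-distilled-agric-trans-2402241",
    framework="tf",
)
result = qa(
    question="Which crop is most affected by the pest?",
    context="Fall armyworm mainly attacks maize, causing severe leaf damage.",
)
print(result["answer"], result["score"])
```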
suhaaspk/suhaasface
suhaaspk
2024-02-24T19:34:25Z
28
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-24T19:28:36Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### SuhaasFace Dreambooth model trained by suhaaspk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
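Besides the Colab links, the model can presumably be loaded directly with diffusers, since the repo is tagged `diffusers:StableDiffusionPipeline`. A sketch, with the trigger token assumed from the concept name:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "suhaaspk/suhaasface", torch_dtype=torch.float16
).to("cuda")

# "suhaasface" is assumed to be the Dreambooth instance token; check the
# sample prompts in the repo for the exact wording.
image = pipe("a portrait photo of suhaasface, studio lighting").images[0]
image.save("suhaasface.png")
```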
jesuzn/LunarLander-v2
jesuzn
2024-02-24T19:15:44Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-24T19:09:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 265.83 +/- 25.80 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
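This card carries the same TODO as josephleekl/ppo-LunarLander-v2 above; the same loading pattern applies, again assuming the conventional checkpoint filename:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("jesuzn/LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```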
kalobiralo/t5-grammar-model
kalobiralo
2024-02-24T19:03:29Z
107
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-24T19:02:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shaheryar48/roberta_fine_tuned
shaheryar48
2024-02-24T18:58:07Z
163
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-24T18:56:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
macadeliccc/MonarchCorso-7B
macadeliccc
2024-02-24T18:57:28Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:macadeliccc/MBX-7B-v3-DPO", "base_model:merge:macadeliccc/MBX-7B-v3-DPO", "base_model:mlabonne/AlphaMonarch-7B", "base_model:merge:mlabonne/AlphaMonarch-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-23T04:51:54Z
--- base_model: - macadeliccc/MBX-7B-v3-DPO - mlabonne/AlphaMonarch-7B library_name: transformers tags: - mergekit - merge --- # MonarchCorso-7B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO) * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: mlabonne/AlphaMonarch-7B layer_range: [0, 32] - model: macadeliccc/MBX-7B-v3-DPO layer_range: [0, 32] merge_method: slerp base_model: mlabonne/AlphaMonarch-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
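The card documents the merge but not inference; since the result is an ordinary Mistral-architecture causal LM, a standard transformers sketch should apply:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/MonarchCorso-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer(
    "Explain SLERP merging in one paragraph.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```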
bitadin/medium-title-latest
bitadin
2024-02-24T18:48:47Z
13
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-22T10:03:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mi-rei/CT_clinical-longformer_II_efficient_10e_pass2
mi-rei
2024-02-24T18:44:31Z
119
0
transformers
[ "transformers", "safetensors", "longformer", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-24T18:43:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
badokorach/distilbert-base-cased-distilled-agric-trans-240224
badokorach
2024-02-24T18:32:09Z
60
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-cased-distilled-squad", "base_model:finetune:distilbert/distilbert-base-cased-distilled-squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-02-24T18:02:21Z
--- license: apache-2.0 base_model: distilbert-base-cased-distilled-squad tags: - generated_from_keras_callback model-index: - name: badokorach/distilbert-base-cased-distilled-agric-trans-240224 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # badokorach/distilbert-base-cased-distilled-agric-trans-240224 This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8057 - Validation Loss: 0.0 - Epoch: 14 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 690, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.8831 | 0.0 | 0 | | 2.1002 | 0.0 | 1 | | 1.8590 | 0.0 | 2 | | 1.6544 | 0.0 | 3 | | 1.4724 | 0.0 | 4 | | 1.3464 | 0.0 | 5 | | 1.2419 | 0.0 | 6 | | 1.1152 | 0.0 | 7 | | 1.0354 | 0.0 | 8 | | 0.9869 | 0.0 | 9 | | 0.9086 | 0.0 | 10 | | 0.8639 | 0.0 | 11 | | 0.8258 | 0.0 | 12 | | 0.7978 | 0.0 | 13 | | 0.8057 | 0.0 | 14 | ### Framework versions - Transformers 4.37.2 - TensorFlow 2.15.0 - Datasets 2.17.1 - Tokenizers 0.15.2
ZEECO1/CancerLLM-Mistral7b
ZEECO1
2024-02-24T18:04:42Z
0
2
null
[ "safetensors", "region:us" ]
null
2024-02-24T17:53:48Z
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "mistralai/Mistral-7B-v0.1"

# 4-bit NF4 quantization, matching the training setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

eval_tokenizer = AutoTokenizer.from_pretrained(
    base_model_id, add_bos_token=True, trust_remote_code=True
)

# Load the LoRA adapter. Hub repo ids have only two segments, so the
# checkpoint directory is passed via `subfolder` rather than appended
# to the repo id as in the original snippet.
ft_model = PeftModel.from_pretrained(
    base_model, "ZEECO1/CancerLLM-Mistral7b", subfolder="checkpoint-500"
)

eval_prompt = " what are the drugs against lung cancer: # "
model_input = eval_tokenizer(eval_prompt, return_tensors="pt").to("cuda")

ft_model.eval()
with torch.no_grad():
    print(eval_tokenizer.decode(
        ft_model.generate(**model_input, max_new_tokens=100, repetition_penalty=1.15)[0],
        skip_special_tokens=True,
    ))
```
LoneStriker/OpenCodeInterpreter-DS-33B-8.0bpw-h8-exl2
LoneStriker
2024-02-24T17:52:45Z
5
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T17:39:03Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: A System for Enhanced Code Generation and Execution"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-DS-33B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
robdemunck/finetuned-t5-small-cnn_dailymail
robdemunck
2024-02-24T17:44:44Z
104
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-23T20:10:05Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer model-index: - name: finetuned-t5-small-cnn_dailymail results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-t5-small-cnn_dailymail This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
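The card omits a usage example; given the model is a t5-small fine-tuned for CNN/DailyMail-style summarization, a minimal sketch would be:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="robdemunck/finetuned-t5-small-cnn_dailymail",
)
article = (
    "The city council voted on Tuesday to expand the bike-lane network, "
    "citing a 40% rise in cycling commutes over the past two years."
)
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```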
LoneStriker/OpenCodeInterpreter-DS-33B-6.0bpw-h6-exl2
LoneStriker
2024-02-24T17:39:01Z
5
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T17:28:39Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: A System for Enhanced Code Generation and Execution"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-DS-33B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
archiMAD/ppo-SnowballTarget
archiMAD
2024-02-24T17:20:20Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-02-24T17:20:18Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: archiMAD/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Ayus077BCT014Bhandari/vartat5-using-100K-plus-17
Ayus077BCT014Bhandari
2024-02-24T17:15:39Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-24T15:17:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
922-SY/ts-v2-gguf
922-SY
2024-02-24T16:52:00Z
4
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:SeaLLMs/SeaLLM-7B-v2", "base_model:quantized:SeaLLMs/SeaLLM-7B-v2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-02-24T16:48:16Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf base_model: SeaLLMs/SeaLLM-7B-v2 --- # Uploaded model - **Developed by:** 922CA - **License:** apache-2.0 - **Finetuned from model :** SeaLLMs/SeaLLM-7B-v2 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
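Since this repo ships GGUF quantizations of a SeaLLM-7B-v2 fine-tune, a minimal llama-cpp-python loading sketch may help. The GGUF filename below is an assumption (check the repo's file list), and the plain-text prompt is illustrative; for best results the prompt should follow the base model's chat format.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# The filename is hypothetical; list the repo files to find the actual .gguf name.
gguf_path = hf_hub_download(repo_id="922-SY/ts-v2-gguf", filename="ts-v2.Q4_K_M.gguf")

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context window is an assumption
out = llm("Translate 'hello' into Vietnamese:", max_tokens=64)
print(out["choices"][0]["text"])
```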
LoneStriker/OpenCodeInterpreter-CL-13B-6.0bpw-h6-exl2
LoneStriker
2024-02-24T16:47:57Z
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T16:43:34Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-13B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
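The snippet above produces a single turn, while the project's distinguishing feature is execution plus iterative refinement. Below is a minimal, hypothetical sketch of one refine round built around that snippet; `generate`, `extract_code`, and `run_code` are stand-in helpers (they reuse the `tokenizer`, `model`, and `prompt` defined above), not part of the official release.

```python
import re
import subprocess
import sys
import tempfile

def generate(messages):
    # Reuses `tokenizer` and `model` from the card's snippet above.
    ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    out = model.generate(ids, max_new_tokens=1024, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id,
                         eos_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0][len(ids[0]):], skip_special_tokens=True)

def extract_code(reply: str) -> str:
    # Pull the first fenced Python block out of the model's reply.
    match = re.search(r"```(?:python)?\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else reply

def run_code(code: str) -> str:
    # Execute in a subprocess; a proper sandbox would be safer for untrusted code.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
    proc = subprocess.run([sys.executable, f.name], capture_output=True,
                          text=True, timeout=30)
    return proc.stdout + proc.stderr

messages = [{"role": "user", "content": prompt}]   # `prompt` from the snippet above
reply = generate(messages)
feedback = run_code(extract_code(reply))           # one execution round
messages += [{"role": "assistant", "content": reply},
             {"role": "user", "content": f"Execution output:\n{feedback}\nFix any errors."}]
reply = generate(messages)                         # one refinement round
```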
LoneStriker/OpenCodeInterpreter-CL-13B-5.0bpw-h6-exl2
LoneStriker
2024-02-24T16:43:33Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T16:39:56Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-13B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
Edentns/DataVortexS-10.7B-dpo-v1.8
Edentns
2024-02-24T16:40:16Z
2,290
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "base_model:megastudyedu/M-SOLAR-10.7B-v1.3", "base_model:finetune:megastudyedu/M-SOLAR-10.7B-v1.3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-29T08:12:38Z
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: megastudy/M-SOLAR-10.7B-v1.3 pipeline_tag: text-generation --- # **DataVortexS-10.7B-dpo-v1.8** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [megastudy/M-SOLAR-10.7B-v1.3](https://huggingface.co/megastudy/M-SOLAR-10.7B-v1.3) ### **Trained On** - **OS**: Ubuntu 22.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Instruction format** It follows **Alpaca (Chat)** format. E.g. ```python text = """\ ### System: 당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다. ### User: 대한민국의 수도는 어디야? ### Assistant: 대한민국의 수도는 서울입니다. ### User: 서울 인구는 총 몇 명이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -----------: | -----------: | -----------: | -----------: | | kobest_boolq | 0.903441 | 0.922987 | 0.919466 | 0.923032 | | kobest_copa | 0.734711 | 0.778697 | 0.773796 | 0.796829 | | kobest_hellaswag | 0.473673 | 0.480091 | 0.491471 | 0.488234 | | kobest_sentineg | 0.536605 | 0.93185 | 0.952136 | 0.949596 | | **Average** | **0.662107** | **0.778406** | **0.784217** | **0.789423** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 58.15 | 52.56 | 66.68 | 51.21 | 59.27 | 61.04 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.8") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.8") messages = [ {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."}, {"role": "user", "content": "대한민국의 수도는 어디야?"}, {"role": "assistant", "content": "대한민국의 수도는 서울입니다."}, {"role": "user", "content": "서울 인구는 총 몇 명이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
nikitha-30/my-pet-dog
nikitha-30
2024-02-24T16:39:58Z
0
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-24T16:36:17Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by nikitha-30 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/nikitha-30/my-pet-dog/resolve/main/sample_images/pexels-photo-1072179.jpeg)
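A minimal diffusers inference sketch for trying this DreamBooth concept; the instance token in the prompt is a guess, since the card does not state the training prompt.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nikitha-30/my-pet-dog", torch_dtype=torch.float16
).to("cuda")

# "my-pet-dog" as the concept token is an assumption; check the training prompt.
image = pipe("a photo of my-pet-dog sitting on a beach",
             num_inference_steps=30).images[0]
image.save("my_pet_dog.png")
```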
LoneStriker/OpenCodeInterpreter-CL-13B-4.0bpw-h6-exl2
LoneStriker
2024-02-24T16:39:55Z
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T16:36:46Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-13B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
makiisthebes/autoencoders
makiisthebes
2024-02-24T16:39:35Z
0
1
tf-keras
[ "tf-keras", "arxiv:1910.09700", "region:us" ]
null
2024-02-24T16:09:22Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Auto-Encoders <!-- Provide a quick summary of what the model is/does. --> This model deals with autoencoders, specifically experimenting with and tweaking the architecture: we train on the MNIST dataset with artificial Gaussian noise so that the model learns to output the original, denoised image. ## Model Details Model outputs include: With 5.4 million parameters, the autoencoder could reconstruct a 256x256 image of my profile picture from an 8x8 representation. ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6319030647a84df2a5dd106c/nqZULm7A30EptQ9pNAQki.gif) With 1.2k parameters, the reconstruction is noticeably coarser: ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6319030647a84df2a5dd106c/pzA2citmpIvtZR403buHa.gif) ------------------ ![Example Autoencoder Diagram](https://cdn-uploads.huggingface.co/production/uploads/6319030647a84df2a5dd106c/_nxtemAHJm6fPVY6qkrJF.png) A 256x256 image is encoded into a small representation (here 4x4) with some loss, but we learn weights that allow the encoder and decoder to work together to minimize that loss. The output is never statistically identical to the input, because there is always some reconstruction loss; since the model only learns to reconstruct typical inputs well, this loss can be used to detect anomalies. Typical encoder processing: - Conv2D - MaxPooling2D (downsamples) Typical decoder processing: - UpSampling2D - Conv2D Here we are fitting: ```python model.fit(x, x, epochs=5, shuffle=True) ``` Use cases for autoencoders: - **Anomaly detection**: suppose we have a data stream from a microscope and are monitoring the output intensity and many other metrics. ![Signal Anomaly Detection](https://cdn-uploads.huggingface.co/production/uploads/6319030647a84df2a5dd106c/Y8MxdWRJrsL2quw4uHvpI.png) Given such data, we can detect failures or even predict them before they occur: for each input we compute its reconstruction, and if the reconstruction error exceeds a chosen threshold we can assume there is an anomaly. - **Denoising**, as covered above: the model reconstructs the input image without the noise, because that is the mapping it was trained on. - **Domain adaptation**: while training an autoencoder we normally fit x to x, but we can also fit x to y and use the network to map inputs into a different domain, for example training a model to go from an image of Einstein to an image of the Mona Lisa. - **Image colorization**: the encoder encodes a grayscale image, and the decoder decodes it to a color image. A minimal runnable sketch of the denoising setup follows below. ------------------- <!-- Provide a longer summary of what this model is. --> - **Developed by:** Michael Peres - **Model type:** AutoEncoder ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** RTX 3070Ti - **Hours used:** 0.05hr
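A compact, hypothetical Keras sketch of the recipe the card describes (Conv2D/MaxPooling2D encoder, UpSampling2D/Conv2D decoder, Gaussian noise on MNIST, and a reconstruction-error threshold for anomaly flagging); layer sizes and the noise level are illustrative, not the trained model's exact configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# MNIST with synthetic Gaussian noise, as described above.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0
x_train_noisy = np.clip(x_train + 0.3 * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + 0.3 * np.random.normal(size=x_test.shape), 0.0, 1.0)

# Encoder: Conv2D + MaxPooling2D downsampling; decoder: UpSampling2D + Conv2D.
model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", padding="same", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),                 # 28x28 -> 14x14
    layers.Conv2D(8, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),                 # 14x14 -> 7x7 bottleneck
    layers.Conv2D(8, 3, activation="relu", padding="same"),
    layers.UpSampling2D(),                 # 7x7 -> 14x14
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.UpSampling2D(),                 # 14x14 -> 28x28
    layers.Conv2D(1, 3, activation="sigmoid", padding="same"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train_noisy, x_train, epochs=5, shuffle=True)  # denoising: noisy in, clean out

# Anomaly flag: reconstruction error above a threshold derived from the data.
recon = model.predict(x_test_noisy)
err = np.mean((recon - x_test_noisy) ** 2, axis=(1, 2, 3))
threshold = err.mean() + 3 * err.std()
print("flagged anomalies:", int((err > threshold).sum()))
```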
Kquant03/NeuralTrix-7B-dpo-relaser
Kquant03
2024-02-24T16:31:02Z
72
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mlabonne/OmniBeagle-7B", "flemmingmiguel/MBX-7B-v3", "AiMavenAi/AiMaven-Prometheus", "base_model:AiMavenAi/AiMaven-Prometheus", "base_model:merge:AiMavenAi/AiMaven-Prometheus", "base_model:flemmingmiguel/MBX-7B-v3", "base_model:merge:flemmingmiguel/MBX-7B-v3", "base_model:mlabonne/OmniBeagle-7B", "base_model:merge:mlabonne/OmniBeagle-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-17T07:55:29Z
--- tags: - merge - mergekit - lazymergekit - mlabonne/OmniBeagle-7B - flemmingmiguel/MBX-7B-v3 - AiMavenAi/AiMaven-Prometheus base_model: - mlabonne/OmniBeagle-7B - flemmingmiguel/MBX-7B-v3 - AiMavenAi/AiMaven-Prometheus license: apache-2.0 --- # NeuralTrix-7B-v1 NeuralTrix-7B-v1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B) * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) * [AiMavenAi/AiMaven-Prometheus](https://huggingface.co/AiMavenAi/AiMaven-Prometheus) It was then trained with DPO using: * https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1 ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: mlabonne/OmniBeagle-7B parameters: density: 0.65 weight: 0.4 - model: flemmingmiguel/MBX-7B-v3 parameters: density: 0.6 weight: 0.35 - model: AiMavenAi/AiMaven-Prometheus parameters: density: 0.6 weight: 0.35 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "CultriX/NeuralTrix-7B-v1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
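The DPO stage above is described only by its dataset, so here is a rough TRL sketch of what such a run could look like. The hyperparameters are assumptions and the API matches TRL circa v0.7 (where `beta` is a trainer argument; newer releases move it into `DPOConfig`); this is not the authors' actual recipe.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "CultriX/NeuralTrix-7B-v1"  # the merged model produced by the config above
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# truthy-dpo-v0.1 provides prompt/chosen/rejected columns (plus metadata fields).
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL builds a frozen reference copy when None is passed
    args=TrainingArguments(
        output_dir="neuraltrix-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        remove_unused_columns=False,
    ),
    beta=0.1,  # strength of the implicit KL penalty toward the reference model
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```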
VietTung04/open_llama_3b_v2_finetuned_SlimOrca_v2
VietTung04
2024-02-24T16:19:10Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-24T16:19:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Trelis/Llama-2-7b-chat-hf-function-calling-v3
Trelis
2024-02-24T16:13:38Z
128
41
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-2", "gguf", "function-calling", "function calling", "conversational", "en", "arxiv:2307.09288", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-04T14:42:08Z
--- language: - en tags: - facebook - meta - pytorch - llama - llama-2 - gguf - function-calling - function calling pipeline_tag: text-generation inference: false arxiv: 2307.09288 --- # Function Calling Fine-tuned Llama 2 Chat This model is fine-tuned for function calling. - The function metadata format is the same as used for OpenAI. - The model is suitable for commercial use and is licensed with the Llama 2 Community license. - A GGUF version is in the gguf branch. Check out other fine-tuned function calling models [here](https://trelis.com/function-calling/). ## Quick Server Setup Runpod one click TGI template [here](https://runpod.io/gsc?template=w13qlaqn59&ref=jmfkcdio). Runpod one click vLLM template [here](https://runpod.io/gsc?template=m0autss7hn&ref=jmfkcdio). Runpod Affiliate [Link](https://runpod.io?ref=jmfkcdio) (helps support the Trelis channel). ## Inference Scripts See below for sample prompt format. Complete inference scripts are available for purchase [here](https://trelis.com/enterprise-server-api-and-inference-guide/): - Easily format prompts using tokenizer.apply_chat_template (starting from OpenAI-formatted functions and a list of messages) - Automate catching, handling and chaining of function calls. ## Prompt Format ``` B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n" B_INST, E_INST = "[INST] ", " [/INST]" #Llama style prompt = f"{B_INST}{B_FUNC}{functionList.strip()}{E_FUNC}{user_prompt.strip()}{E_INST}\n\n" ``` ### Using tokenizer.apply_chat_template For an easier application of the prompt, you can set up as follows: Set up `messages`: ``` [ { "role": "function_metadata", "content": "FUNCTION_METADATA" }, { "role": "user", "content": "What is the current weather in London?" }, { "role": "function_call", "content": "{\n \"name\": \"get_current_weather\",\n \"arguments\": {\n \"city\": \"London\"\n }\n}" }, { "role": "function_response", "content": "{\n \"temperature\": \"15 C\",\n \"condition\": \"Cloudy\"\n}" }, { "role": "assistant", "content": "The current weather in London is Cloudy with a temperature of 15 Celsius" } ] ``` with `FUNCTION_METADATA` as: ``` [ { "type": "function", "function": { "name": "get_current_weather", "description": "This function gets the current weather in a given city", "parameters": { "type": "object", "properties": { "city": { "type": "string", "description": "The city, e.g., San Francisco" }, "format": { "type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use." } }, "required": ["city"] } } }, { "type": "function", "function": { "name": "get_clothes", "description": "This function provides a suggestion of clothes to wear based on the current weather", "parameters": { "type": "object", "properties": { "temperature": { "type": "string", "description": "The temperature, e.g., 15 C or 59 F" }, "condition": { "type": "string", "description": "The weather condition, e.g., 'Cloudy', 'Sunny', 'Rainy'" } }, "required": ["temperature", "condition"] } } } ] ``` and then apply the chat template to get a formatted prompt: ``` tokenizer = AutoTokenizer.from_pretrained('Trelis/Llama-2-7b-chat-hf-function-calling-v3', trust_remote_code=True) prompt = tokenizer.apply_chat_template(messages, tokenize=False) ``` If you are using a gated model, you need to first run: ``` pip install huggingface_hub huggingface-cli login ``` ### Manual Prompt: ``` [INST] You have access to the following functions. 
Use them if required: [ { "type": "function", "function": { "name": "get_big_stocks", "description": "Get the names of the largest N stocks by market cap", "parameters": { "type": "object", "properties": { "number": { "type": "integer", "description": "The number of largest stocks to get the names of, e.g. 25" }, "region": { "type": "string", "description": "The region to consider, can be \"US\" or \"World\"." } }, "required": [ "number" ] } } }, { "type": "function", "function": { "name": "get_stock_price", "description": "Get the stock price of an array of stocks", "parameters": { "type": "object", "properties": { "names": { "type": "array", "items": { "type": "string" }, "description": "An array of stocks" } }, "required": [ "names" ] } } } ] [INST] Get the names of the five largest stocks in the US by market cap [/INST] { "name": "get_big_stocks", "arguments": { "number": 5, "region": "US" } }</s> ``` # Dataset See [Trelis/function_calling_v3](https://huggingface.co/datasets/Trelis/function_calling_v3). ~~~ The original repo card follows below. ~~~ # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes β€” 7B, 13B, and 70B β€” as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. 
Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. 
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/) ## Reporting Issues Please report any software "bug," or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)| |70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
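The prompt-format section at the top of this card shows the JSON call the fine-tuned model emits but not how to execute it. Below is a minimal, hypothetical dispatcher for that format; `get_current_weather` is a stub mirroring the card's example metadata, and none of this is the paid Trelis inference script.

```python
import json

def get_current_weather(city: str, format: str = "celsius") -> str:
    # Stub mirroring the card's example metadata; swap in a real weather lookup.
    return json.dumps({"temperature": "15 C", "condition": "Cloudy"})

TOOLS = {"get_current_weather": get_current_weather}

def dispatch(model_output: str) -> str:
    """Parse the model's JSON function call and run the matching Python function."""
    call = json.loads(model_output)
    return TOOLS[call["name"]](**call["arguments"])

# The model emits e.g. '{ "name": "get_current_weather", "arguments": { "city": "London" } }';
# the returned string goes back to the model as a "function_response" turn.
print(dispatch('{ "name": "get_current_weather", "arguments": { "city": "London" } }'))
```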
Khadidja22/my_awesome_sentiment_model
Khadidja22
2024-02-24T16:06:56Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-24T14:59:53Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_sentiment_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_sentiment_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.2227 - Accuracy: 0.9473 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0744 | 1.0 | 2250 | 0.2227 | 0.9473 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
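The card omits a usage snippet, so here is a minimal sketch; the label names depend on the unstated training configuration.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Khadidja22/my_awesome_sentiment_model")
print(classifier("This movie was absolutely wonderful!"))
# Labels may be generic (e.g. LABEL_0 / LABEL_1) unless id2label was set during training.
```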
FINNUMBER/Yi-Ko-6B-Finch-TQA-400-epoch8
FINNUMBER
2024-02-24T15:53:12Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T17:51:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
FINNUMBER/Yi-Ko-6B-Finch-ALL-FULL-CorrectX-epoch3
FINNUMBER
2024-02-24T15:53:04Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T09:33:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
olonok/flan-t5-base-multi_news
olonok
2024-02-24T15:51:01Z
161
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-24T15:50:25Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: flan-t5-base-multi_news results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-multi_news This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1226 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.3803 | 1.0 | 5622 | 2.1816 | | 2.2688 | 2.0 | 11244 | 2.1554 | | 2.1995 | 3.0 | 16866 | 2.1341 | | 2.1854 | 4.0 | 22488 | 2.1352 | | 2.1352 | 5.0 | 28110 | 2.1297 | | 2.1199 | 6.0 | 33732 | 2.1241 | | 2.1218 | 7.0 | 39354 | 2.1218 | | 2.1056 | 8.0 | 44976 | 2.1223 | | 2.0928 | 9.0 | 50598 | 2.1228 | | 2.0834 | 10.0 | 56220 | 2.1226 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
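A usage sketch for the summarizer above. Whether training used a task prefix or the multi_news convention of separating source articles with "|||||" is not stated on the card, so treat the input formatting as an assumption.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="olonok/flan-t5-base-multi_news")

# "|||||" as an article separator follows the multi_news dataset; verify against training.
articles = "First news article text ... ||||| Second article on the same story ..."
print(summarizer(articles, max_length=150, min_length=40)[0]["summary_text"])
```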
damerajee/Gaja-vv1
damerajee
2024-02-24T15:46:35Z
48
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T15:10:05Z
--- library_name: transformers license: llama2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mahesh9/bart_samsum
Mahesh9
2024-02-24T15:30:25Z
104
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-23T02:53:40Z
--- license: mit base_model: facebook/bart-large-cnn tags: - generated_from_trainer metrics: - rouge model-index: - name: bart_samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart_samsum This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the [SAMSUM](https://huggingface.co/datasets/samsum) dataset. It achieves the following results on the evaluation set: - Loss: 0.4966 - Rouge1: 41.4888 - Rouge2: 21.4374 - Rougel: 32.0455 - Rougelsum: 38.5273 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 0.525 | 0.54 | 500 | 0.5377 | 39.9053 | 20.1597 | 30.8845 | 37.3644 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.1 - Tokenizers 0.15.2
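A minimal inference sketch (not part of the original card); it assumes the repo hosts both model and tokenizer and works with the standard `summarization` pipeline:

```python
from transformers import pipeline

# Dialogue summarizer fine-tuned on SAMSum.
summarizer = pipeline("summarization", model="Mahesh9/bart_samsum")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```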
a-r-r-o-w/dragnuwa-svd
a-r-r-o-w
2024-02-24T15:26:59Z
11
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "diffusers:StableVideoDragNUWAPipeline", "region:us" ]
null
2024-02-24T13:34:11Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sugatoray/mlx-gemma-7b-q4bits
sugatoray
2024-02-24T15:20:04Z
78
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "mlx", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T15:12:33Z
--- license: other library_name: transformers tags: - mlx extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms --- # sugatoray/mlx-gemma-7b-q4bits This model was converted to MLX format from [`google/gemma-7b`](https://huggingface.co/google/gemma-7b). Refer to the [original model card](https://huggingface.co/google/gemma-7b) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("sugatoray/mlx-gemma-7b-q4bits") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
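For quick tests without writing Python, recent mlx-lm releases also ship a CLI; the exact flags can vary between versions, so treat this as a sketch:

```bash
python -m mlx_lm.generate --model sugatoray/mlx-gemma-7b-q4bits --prompt "hello"
```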
mathugo/crypto_news_bert
mathugo
2024-02-24T15:20:01Z
664
4
transformers
[ "transformers", "safetensors", "roberta", "fill-mask", "crypto", "bitcoin", "news", "eth", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-05T13:39:39Z
--- license: apache-2.0 language: - en library_name: transformers metrics: - accuracy tags: - crypto - bitcoin - news - eth - transformers widget: - text: >- Bitcoin Vault (BTCV) traded 5.6% higher against the <mask> during the twenty-four hour period ending at 14:00 PM Eastern on October 7th. In the last week, Bitcoin Vault has traded down 2.7% against the dollar. One Bitcoin Vault coin can now be bought for approximately $2.48 or 0.00012763 BTC on major cryptocurrency exchanges. Bitcoin Vault has a total market cap of $5.20 million and approximately $63,451.00 worth of Bitcoin Vault was traded on exchanges in the last day. Here's how other cryptocurrencies have performed in the last day: Bitcoin (BTC) example_title: MLM 1 - text: >- Good morning. Here's what's <mask>:Prices: Bitcoin started what has historically been a strong month about where it ended a dismal September, holding over $19K.Insights: USDC's stablecoin-fueled model of money, in which the dollar functions as an open 'protocol,' could allow innovation to flourish. But healthy competition is a prerequisite.Catch the latest episodes of CoinDesk TV for insightful interviews with crypto industry leaders and analysis. And sign up for First Mover, our daily newsletter putting the latest moves in crypto markets in context. example_title: MLM 2 pipeline_tag: fill-mask --- CryptoBERT is a pre-trained BERT (Bidirectional Encoder Representations from Transformers) model fine-tuned on a dataset of crypto-related news articles. It is designed to analyze and understand crypto news, providing valuable insights into the rapidly evolving world of cryptocurrencies. ## Features - **Domain-Specific Knowledge**: Trained on a diverse dataset of crypto news, CryptoBERT captures domain-specific information, enabling it to understand the unique language and context of the cryptocurrency space. - **Sentiment Analysis**: CryptoBERT is capable of sentiment analysis, helping you gauge the overall sentiment expressed in crypto news articles, whether it's positive, negative, or neutral. - **Named Entity Recognition (NER)**: The model excels in identifying key entities such as cryptocurrency names, organizations, and important figures, enhancing its ability to extract relevant information. - **Fine-tuned for Crypto Jargon**: CryptoBERT is fine-tuned to recognize and understand the specialized jargon commonly used in the crypto industry, ensuring accurate interpretation of news articles. ## Usage
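A minimal fill-mask sketch to fill the empty Usage section above; it assumes the checkpoint works with the standard `fill-mask` pipeline and the RoBERTa-style `<mask>` token shown in the widget examples:

```python
from transformers import pipeline

# Masked-language model trained on crypto news.
fill = pipeline("fill-mask", model="mathugo/crypto_news_bert")

for pred in fill("Bitcoin traded 5.6% higher against the <mask> during the session."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```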
LoneStriker/OpenCodeInterpreter-DS-6.7B-8.0bpw-h8-exl2
LoneStriker
2024-02-24T15:19:10Z
7
2
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "code", "conversational", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T15:16:17Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-DS-6.7B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
LoneStriker/OpenCodeInterpreter-DS-6.7B-6.0bpw-h6-exl2
LoneStriker
2024-02-24T15:16:15Z
8
2
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "code", "conversational", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T15:14:02Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-DS-6.7B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
Samuael/amhmt5-base-finetuned-amt5
Samuael
2024-02-24T15:12:20Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:Samuael/amt5-base", "base_model:finetune:Samuael/amt5-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-23T13:23:51Z
--- base_model: Samuael/amt5-base tags: - generated_from_trainer model-index: - name: amhmt5-base-finetuned-amt5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amhmt5-base-finetuned-amt5 This model is a fine-tuned version of [Samuael/amt5-base](https://huggingface.co/Samuael/amt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.3710 - eval_rouge1: 0.0 - eval_rouge2: 0.0 - eval_rougeL: 0.0 - eval_rougeLsum: 0.0 - eval_gen_len: 18.5905 - eval_wer: 0.6072 - eval_cer: 0.5393 - eval_runtime: 4.0504 - eval_samples_per_second: 88.633 - eval_steps_per_second: 1.481 - epoch: 4.0 - step: 5672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
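The card does not document inference, so here is a minimal sketch (not from the original card); the example input is a placeholder, since the exact task and prompt format of this fine-tune are not stated:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Samuael/amhmt5-base-finetuned-amt5")
model = AutoModelForSeq2SeqLM.from_pretrained("Samuael/amhmt5-base-finetuned-amt5")

# Placeholder Amharic input; the training task is undocumented, so adapt as needed.
inputs = tokenizer("αˆ°αˆ‹αˆ αŠ₯αŠ•α‹΄α‰΅ αŠα‹", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```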
LoneStriker/OpenCodeInterpreter-DS-6.7B-3.0bpw-h6-exl2
LoneStriker
2024-02-24T15:10:26Z
5
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "code", "conversational", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T15:09:05Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-DS-6.7B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
LoneStriker/OpenCodeInterpreter-DS-6.7B-GGUF
LoneStriker
2024-02-24T15:07:38Z
927
9
null
[ "gguf", "code", "text-generation", "en", "arxiv:2402.14658", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-02-24T14:56:01Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-DS-6.7B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
Tgratzi/flan-t5-small-ruleviewer
Tgratzi
2024-02-24T15:03:50Z
105
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-24T15:03:38Z
--- license: apache-2.0 base_model: google/flan-t5-small tags: - generated_from_trainer model-index: - name: flan-t5-small-ruleviewer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-small-ruleviewer This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0002 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.2874 | 7.69 | 300 | 0.0036 | | 0.0085 | 15.38 | 600 | 0.0002 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
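A minimal inference sketch (usage is not documented in the card); the prompt is a placeholder because the training dataset is unknown:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Tgratzi/flan-t5-small-ruleviewer")

# Placeholder prompt; adjust to whatever input format the model was trained on.
print(generator("Summarize the rule conditions.", max_new_tokens=64)[0]["generated_text"])
```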
LarryAIDraw/SDXL_LORA_CHARACTER_ROSA_POKEMON_V1
LarryAIDraw
2024-02-24T15:01:01Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-24T14:57:32Z
--- license: creativeml-openrail-m --- https://civitai.com/models/305832/sdxlloracharacterrosa-pokemon
LoneStriker/OpenCodeInterpreter-CL-70B-3.5bpw-h6-exl2
LoneStriker
2024-02-24T14:51:03Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T14:38:23Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-70B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
Khadidja22/my_awesome_model
Khadidja22
2024-02-24T14:49:08Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-24T14:14:12Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2393 - Accuracy: 0.945 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1742 | 1.0 | 2250 | 0.1815 | 0.946 | | 0.1128 | 2.0 | 4500 | 0.1983 | 0.9445 | | 0.0666 | 3.0 | 6750 | 0.2393 | 0.945 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
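A minimal classification sketch (not in the original card); the label names come from whatever the checkpoint's config defines:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Khadidja22/my_awesome_model")
print(classifier("This was a surprisingly good movie."))
```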
tptodorov/ppo-LunarLander-v2
tptodorov
2024-02-24T14:38:35Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-24T14:27:15Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 245.05 +/- 50.32 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the original card left this section as a TODO); the checkpoint filename inside the repo is an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; check the repo's file listing if loading fails.
checkpoint = load_from_hub("tptodorov/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
Vandhupriya/my-pet-dog
Vandhupriya
2024-02-24T14:38:35Z
1
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-24T14:34:35Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by Vandhupriya following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/Vandhupriya/my-pet-dog/resolve/main/sample_images/cat-1045782_1920.jpg)
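A minimal inference sketch (not part of the original card); the prompt token is an assumption based on the repo name, so adjust it to the instance prompt actually used during DreamBooth training:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Vandhupriya/my-pet-dog", torch_dtype=torch.float16
).to("cuda")

# "my-pet-dog" as the concept token is an assumption; check the training prompt.
image = pipe("a photo of my-pet-dog on the beach").images[0]
image.save("my_pet_dog.png")
```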
LoneStriker/OpenCodeInterpreter-CL-70B-2.65bpw-h6-exl2
LoneStriker
2024-02-24T14:38:22Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T14:28:46Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-70B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
kumbi500/FT_DistilBERT
kumbi500
2024-02-24T14:37:24Z
105
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-24T13:58:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall base_model: distilbert-base-uncased model-index: - name: FT_DistilBERT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FT_DistilBERT This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2519 - Accuracy: 0.8892 - F1: 0.8892 - Precision: 0.8904 - Recall: 0.8900 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.3172 | 1.0 | 1000 | 0.2984 | 0.8745 | 0.8740 | 0.8772 | 0.8734 | | 0.2419 | 2.0 | 2000 | 0.2519 | 0.8892 | 0.8892 | 0.8904 | 0.8900 | | 0.2102 | 3.0 | 3000 | 0.2963 | 0.8955 | 0.8955 | 0.8960 | 0.8960 | | 0.1679 | 4.0 | 4000 | 0.3012 | 0.9005 | 0.9004 | 0.9007 | 0.9002 | | 0.1569 | 5.0 | 5000 | 0.3147 | 0.8958 | 0.8957 | 0.8958 | 0.8956 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
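For reference, the hyperparameters listed above map roughly onto the following `TrainingArguments`; this is a reconstruction, not the original training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="FT_DistilBERT",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",  # linear schedule with Adam defaults
    num_train_epochs=5,
)
```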
LoneStriker/OpenCodeInterpreter-CL-70B-6.0bpw-h6-exl2
LoneStriker
2024-02-24T14:28:45Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T14:07:39Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-70B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
Edentns/DataVortexM-7B-Instruct-v0.1
Edentns
2024-02-24T14:19:16Z
2,247
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ko", "dataset:beomi/KoAlpaca-v1.1a", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-04T00:21:22Z
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: mistralai/Mistral-7B-Instruct-v0.2 pipeline_tag: text-generation datasets: - beomi/KoAlpaca-v1.1a --- # **DataVortexM-7B-Instruct-v0.1** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Dataset** - [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) ### **Instruction format** It follows **Alpaca** format. E.g. ```python text = """\ 당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€. ### Instruction: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό? ### Response: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€. ### Instruction: μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** On Benchmarking ... | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -----: | -----: | ------: | ------: | | kobest_boolq | 0.0 | 0.0 | 0.0 | 0.0 | | kobest_copa | 0.0 | 0.0 | 0.0 | 0.0 | | kobest_hellaswag | 0.0 | 0.0 | 0.0 | 0.0 | | kobest_sentineg | 0.0 | 0.0 | 0.0 | 0.0 | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 39.81 | 34.13 | 42.35 | 38.73 | 45.46 | 38.37 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexM-7B-Instruct-v0.1") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexM-7B-Instruct-v0.1") messages = [ {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."}, {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"}, {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."}, {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
Edentns/DataVortexTL-1.1B-v0.1
Edentns
2024-02-24T14:19:06Z
2,374
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:beomi/KoAlpaca-v1.1a", "dataset:jojo0217/korean_rlhf_dataset", "dataset:kyujinpy/OpenOrca-KO", "dataset:nlpai-lab/kullm-v2", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-09T00:14:38Z
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 pipeline_tag: text-generation datasets: - beomi/KoAlpaca-v1.1a - jojo0217/korean_rlhf_dataset - kyujinpy/OpenOrca-KO - nlpai-lab/kullm-v2 widget: - text: > <|system|> You are a chatbot who answers User's questions. <|user|> λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό? <|assistant|> --- # **DataVortexTL-1.1B-v0.1** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 1ea - **transformers**: v4.36.2 ### **Dataset** - [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) - [jojo0217/korean_rlhf_dataset](https://huggingface.co/datasets/jojo0217/korean_rlhf_dataset) - [kyujinpy/OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO) - [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) ### **Instruction format** It follows **TinyLlama** format. E.g. ```python text = """\ <|system|> 당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€.</s> <|user|> λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?</s> <|assistant|> λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€.</s> <|user|> μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?</s> """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -------------: | -------------: | -------------: | -----------: | | kobest_boolq | 0.334282 | 0.516446 | 0.500478 | 0.498941 | | kobest_copa | 0.515061 | 0.504321 | 0.492927 | 0.50809 | | kobest_hellaswag | 0.36253 | 0.357733 | 0.355873 | 0.376502 | | kobest_sentineg | 0.481146 | 0.657411 | 0.687417 | 0.635703 | | **Average** | **0.42325475** | **0.50897775** | **0.50917375** | **0.504809** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 31.5 | 25.26 | 33.53 | 24.56 | 43.34 | 30.81 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. 
```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexTL-1.1B-v0.1") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexTL-1.1B-v0.1") messages = [ {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."}, {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"}, {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."}, {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
Edentns/DataVortexS-10.7B-v0.2
Edentns
2024-02-24T14:18:32Z
2,247
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:beomi/KoAlpaca-v1.1a", "dataset:Edentns/Worktronics-FAQ", "base_model:hyeogi/SOLAR-10.7B-dpo-v0.1", "base_model:finetune:hyeogi/SOLAR-10.7B-dpo-v0.1", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-04T15:35:36Z
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: hyeogi/SOLAR-10.7B-dpo-v0.1 pipeline_tag: text-generation datasets: - beomi/KoAlpaca-v1.1a - Edentns/Worktronics-FAQ --- # **DataVortexS-10.7B-v0.2** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [hyeogi/SOLAR-10.7B-dpo-v0.1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v0.1) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 1ea - **transformers**: v4.36.2 ### **Dataset** - [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) - Edentns/Worktronics-FAQ - private ### **Instruction format** It follows **Alpaca** format. E.g. ```python text = """\ 당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€. ### Instruction: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό? ### Response: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€. ### Instruction: μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | ------------: | -------------: | -------------: | -------------: | | kobest_boolq | 0.501449 | 0.668845 | 0.652565 | 0.655491 | | kobest_copa | 0.635474 | 0.685637 | 0.708601 | 0.725683 | | kobest_hellaswag | 0.417966 | 0.442942 | 0.428077 | 0.425199 | | kobest_sentineg | 0.681941 | 0.880517 | 0.921754 | 0.939528 | | **Average** | **0.5592075** | **0.66948525** | **0.67774925** | **0.68647525** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 43.6 | 38.74 | 50.74 | 38.98 | 44.7 | 44.86 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-v0.2") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v0.2") messages = [ {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."}, {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"}, {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."}, {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
Edentns/DataVortexS-10.7B-v0.3
Edentns
2024-02-24T14:18:23Z
2,250
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:jojo0217/korean_rlhf_dataset", "base_model:hyeogi/SOLAR-10.7B-dpo-v0.1", "base_model:finetune:hyeogi/SOLAR-10.7B-dpo-v0.1", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-06T03:06:51Z
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: hyeogi/SOLAR-10.7B-dpo-v0.1 pipeline_tag: text-generation datasets: - jojo0217/korean_rlhf_dataset --- # **DataVortexS-10.7B-v0.3** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [hyeogi/SOLAR-10.7B-dpo-v0.1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v0.1) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 1ea - **transformers**: v4.36.2 ### **Dataset** - [jojo0217/korean_rlhf_dataset](https://huggingface.co/datasets/jojo0217/korean_rlhf_dataset) ### **Instruction format** It follows **Alpaca** format. E.g. ```python text = """\ 당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€. ### Instruction: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό? ### Response: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€. ### Instruction: μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -------------: | -------------: | ------------: | -------------: | | kobest_boolq | 0.606754 | 0.553485 | 0.583201 | 0.587602 | | kobest_copa | 0.603643 | 0.625567 | 0.618533 | 0.627404 | | kobest_hellaswag | 0.360793 | 0.366002 | 0.37105 | 0.357439 | | kobest_sentineg | 0.652929 | 0.751097 | 0.742426 | 0.760152 | | **Average** | **0.55602975** | **0.57403775** | **0.5788025** | **0.58314925** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 37.57 | 33.87 | 42.47 | 28.21 | 46.09 | 37.19 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-v0.3") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v0.3") messages = [ {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."}, {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"}, {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."}, {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
Edentns/DataVortexS-10.7B-v1.0
Edentns
2024-02-24T14:18:15Z
2,302
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "base_model:megastudyedu/M-SOLAR-10.7B-v1.3", "base_model:finetune:megastudyedu/M-SOLAR-10.7B-v1.3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-15T00:00:06Z
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: megastudyedu/M-SOLAR-10.7B-v1.3 pipeline_tag: text-generation --- # **DataVortexS-10.7B-v1.0** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [megastudyedu/M-SOLAR-10.7B-v1.3](https://huggingface.co/megastudyedu/M-SOLAR-10.7B-v1.3) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Instruction format** It follows **Alpaca** format. E.g. ```python text = """\ ### System: 당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€. ### User: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό? ### Assistant: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€. ### User: μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -------------: | -------------: | -------------: | ------------: | | kobest_boolq | 0.334282 | 0.334282 | 0.334282 | 0.769923 | | kobest_copa | 0.480501 | 0.475746 | 0.46338 | 0.475528 | | kobest_hellaswag | 0.225818 | 0.240596 | 0.234316 | 0.449779 | | kobest_sentineg | 0.33165 | 0.386189 | 0.366913 | 0.360296 | | **Average** | **0.34306275** | **0.35920325** | **0.34972275** | **0.5138815** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 40.75 | 49.06 | 25.66 | 53.63 | 45.76 | 29.63 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-v1.0") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v1.0") messages = [ {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."}, {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"}, {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."}, {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
Edentns/DataVortexS-10.7B-v0.4
Edentns
2024-02-24T14:18:06Z
2,247
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:Edentns/data_go_kr-PublicDoc", "dataset:Edentns/aihub-TL_unanswerable_output", "dataset:Edentns/aihub-TL_span_extraction_how_output", "dataset:Edentns/aihub-TL_multiple_choice_output", "dataset:Edentns/aihub-TL_text_entailment_output", "dataset:jojo0217/korean_rlhf_dataset", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "dataset:beomi/KoAlpaca-v1.1a", "dataset:HumanF-MarkrAI/WIKI_QA_Near_dedup", "base_model:LDCC/LDCC-SOLAR-10.7B", "base_model:finetune:LDCC/LDCC-SOLAR-10.7B", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T00:03:19Z
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: LDCC/LDCC-SOLAR-10.7B pipeline_tag: text-generation datasets: - Edentns/data_go_kr-PublicDoc - Edentns/aihub-TL_unanswerable_output - Edentns/aihub-TL_span_extraction_how_output - Edentns/aihub-TL_multiple_choice_output - Edentns/aihub-TL_text_entailment_output - jojo0217/korean_rlhf_dataset - kyujinpy/KOR-OpenOrca-Platypus-v3 - beomi/KoAlpaca-v1.1a - HumanF-MarkrAI/WIKI_QA_Near_dedup --- # **DataVortexS-10.7B-v0.4** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 2ea - **transformers**: v4.36.2 ### **Dataset** - Edentns/data_go_kr-PublicDoc - private - Edentns/aihub-TL_unanswerable_output - private - Edentns/aihub-TL_span_extraction_how_output - private - Edentns/aihub-TL_multiple_choice_output - private - Edentns/aihub-TL_text_entailment_output - private - [jojo0217/korean_rlhf_dataset](https://huggingface.co/datasets/jojo0217/korean_rlhf_dataset) - [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3) - [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) - [HumanF-MarkrAI/WIKI_QA_Near_dedup](https://huggingface.co/datasets/HumanF-MarkrAI/WIKI_QA_Near_dedup) ### **Instruction format** It follows **Alpaca** format. E.g. ```python text = """\ 당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€. ### Instruction: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό? ### Response: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€. ### Instruction: μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | ----------: | -------------: | -------------: | -----------: | | kobest_boolq | 0.389066 | 0.912924 | 0.912808 | 0.906428 | | kobest_copa | 0.744865 | 0.747742 | 0.768856 | 0.785896 | | kobest_hellaswag | 0.455793 | 0.443909 | 0.465783 | 0.472771 | | kobest_sentineg | 0.584156 | 0.947082 | 0.962216 | 0.954657 | | **Average** | **0.54347** | **0.76291425** | **0.77741575** | **0.779938** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 54.15 | 49.4 | 59.7 | 54.63 | 47.5 | 59.5 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. 
```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-v0.4") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v0.4") messages = [ {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."}, {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"}, {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."}, {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
Edentns/DataVortexS-10.7B-dpo-v0.1
Edentns
2024-02-24T14:17:55Z
2,247
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:mncai/orca_dpo_pairs_ko", "dataset:Ja-ck/Orca-DPO-Pairs-KO", "dataset:We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs", "base_model:LDCC/LDCC-SOLAR-10.7B", "base_model:finetune:LDCC/LDCC-SOLAR-10.7B", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T04:26:51Z
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: LDCC/LDCC-SOLAR-10.7B pipeline_tag: text-generation datasets: - mncai/orca_dpo_pairs_ko - Ja-ck/Orca-DPO-Pairs-KO - We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs --- # **DataVortexS-10.7B-dpo-v0.1** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 2ea - **transformers**: v4.36.2 ### **Dataset** - [mncai/orca_dpo_pairs_ko](https://huggingface.co/datasets/mncai/orca_dpo_pairs_ko) - [Ja-ck/Orca-DPO-Pairs-KO](https://huggingface.co/datasets/Ja-ck/Orca-DPO-Pairs-KO) - [We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs](https://huggingface.co/datasets/We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs) ### **Instruction format** It follows **Alpaca** format. E.g. ```python text = """\ 당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€. ### User: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό? ### Assistant: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€. ### User: μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | ------------: | -------------: | -----------: | -------------: | | kobest_boolq | 0.334282 | 0.891367 | 0.896755 | 0.884441 | | kobest_copa | 0.697763 | 0.716762 | 0.724769 | 0.751746 | | kobest_hellaswag | 0.432047 | 0.458301 | 0.443993 | 0.458232 | | kobest_sentineg | 0.49353 | 0.954657 | 0.964735 | 0.949606 | | **Average** | **0.4894055** | **0.75527175** | **0.757563** | **0.76100625** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 53.21 | 47.87 | 57.18 | 54.82 | 53.64 | 52.54 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v0.1") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v0.1") messages = [ {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."}, {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"}, {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."}, {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. 
<div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
kamran29/whisper-small-en-kamran-sahil
kamran29
2024-02-24T14:17:49Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "en", "dataset:sahilkadge/medical_audio_dataset", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-22T09:49:41Z
--- language: - en license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer datasets: - sahilkadge/medical_audio_dataset model-index: - name: Whisper Small en - Kamran_Sahil results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small en - Kamran_Sahil This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the [sahilkadge/medical_audio_dataset](https://huggingface.co/datasets/sahilkadge/medical_audio_dataset) dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
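A minimal inference sketch (not part of the original card), assuming the checkpoint loads with the standard `transformers` ASR pipeline; the audio filename is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint from the Hub
asr = pipeline(
    "automatic-speech-recognition",
    model="kamran29/whisper-small-en-kamran-sahil",
)

# Transcribe a local audio file (path is a placeholder)
print(asr("sample.wav")["text"])
```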
Edentns/DataVortexS-10.7B-dpo-v1.4
Edentns
2024-02-24T14:17:27Z
2,243
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "base_model:yanolja/Bookworm-10.7B-v0.4-DPO", "base_model:finetune:yanolja/Bookworm-10.7B-v0.4-DPO", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T00:47:24Z
--- tags: - text-generation license: cc-by-nc-4.0 language: - ko base_model: yanolja/Bookworm-10.7B-v0.4-DPO pipeline_tag: text-generation --- # **DataVortexS-10.7B-dpo-v1.4** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [yanolja/Bookworm-10.7B-v0.4-DPO](https://huggingface.co/yanolja/Bookworm-10.7B-v0.4-DPO) ### **Trained On** - **OS**: Ubuntu 22.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Instruction format** It follows **ChatML** format. E.g. ```python text = """\ <|im_start|>system 당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€.<|im_end|> <|im_start|>user λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?<|im_end|> <|im_start|>assistant λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€.<|im_end|> <|im_start|>user μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?<|im_end|> <|im_start|>assistant """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -----------: | -----------: | -----------: | -----------: | | kobest_boolq | 0.757911 | 0.907177 | 0.924496 | 0.605075 | | kobest_copa | 0.740605 | 0.801886 | 0.831886 | 0.849978 | | kobest_hellaswag | 0.445176 | 0.454788 | 0.468654 | 0.45218 | | kobest_sentineg | 0.415445 | 0.95214 | 0.962217 | 0.967254 | | **Average** | **0.589784** | **0.778998** | **0.796813** | **0.718622** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 53.81 | 52.05 | 62.93 | 53.59 | 50.42 | 50.06 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.4") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.4") messages = [ {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."}, {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"}, {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."}, {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** This model is licensed under the [cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license, which allows others to share and adapt the model for non-commercial purposes. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
Edentns/DataVortexS-10.7B-dpo-v1.10
Edentns
2024-02-24T14:16:33Z
2,247
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "base_model:beomi/OPEN-SOLAR-KO-10.7B", "base_model:finetune:beomi/OPEN-SOLAR-KO-10.7B", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T07:02:25Z
--- tags: - text-generation license: cc-by-nc-4.0 language: - ko base_model: beomi/OPEN-SOLAR-KO-10.7B pipeline_tag: text-generation --- # **DataVortexS-10.7B-dpo-v1.10** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) ### **Trained On** - **OS**: Ubuntu 22.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Instruction format** It follows **Alpaca (Chat)** format. E.g. ```python text = """\ ### System: 당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€. ### User: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό? ### Assistant: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€. ### User: μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -----------: | -----------: | -----------: | -----------: | | kobest_boolq | 0.874261 | 0.897165 | 0.904985 | 0.907857 | | kobest_copa | 0.807479 | 0.845701 | 0.860809 | 0.8719 | | kobest_hellaswag | 0.504865 | 0.502074 | 0.50717 | 0.51609 | | kobest_sentineg | 0.409404 | 0.967251 | 0.992443 | 0.982367 | | **Average** | **0.649002** | **0.803048** | **0.816352** | **0.819553** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 56.32 | 54.27 | 63.16 | 49.95 | 55.08 | 59.15 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.10") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.10") messages = [ {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."}, {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"}, {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."}, {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** This model is licensed under the [cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license, which allows others to share and adapt the model for non-commercial purposes. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
Edentns/DataVortexS-10.7B-dpo-v1.12
Edentns
2024-02-24T14:16:31Z
114
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "base_model:megastudyedu/M-SOLAR-10.7B-v1.3", "base_model:finetune:megastudyedu/M-SOLAR-10.7B-v1.3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-01T05:31:52Z
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: megastudyedu/M-SOLAR-10.7B-v1.3 pipeline_tag: text-generation --- # **DataVortexS-10.7B-dpo-v1.12** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [megastudyedu/M-SOLAR-10.7B-v1.3](https://huggingface.co/megastudyedu/M-SOLAR-10.7B-v1.3) ### **Trained On** - **OS**: Ubuntu 22.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Instruction format** It follows **Alpaca (Chat)** format. E.g. ```python text = """\ ### System: 당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€. ### User: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό? ### Assistant: λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€. ### User: μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -----------: | ----------: | -----------: | -----------: | | kobest_boolq | 0.895272 | 0.93443 | 0.938023 | 0.940851 | | kobest_copa | 0.735618 | 0.778902 | 0.790925 | 0.809938 | | kobest_hellaswag | 0.490442 | 0.481539 | 0.478118 | 0.494714 | | kobest_sentineg | 0.782981 | 0.95213 | 0.952136 | 0.947082 | | **Average** | **0.726078** | **0.78675** | **0.789801** | **0.798146** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 57.61 | 54.44 | 67.21 | 54.09 | 61.88 | 50.41 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.12") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.12") messages = [ {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."}, {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"}, {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."}, {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
manusehgal/all-data
manusehgal
2024-02-24T14:15:03Z
1
0
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-02-24T11:46:47Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: product data tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
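A minimal inference sketch (not part of the original card), assuming this AutoTrain DreamBooth run produced standard SDXL LoRA weights in the repo; the output filename is illustrative:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model named in the card metadata
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Apply the DreamBooth LoRA from this repo (assumes LoRA-format weights)
pipe.load_lora_weights("manusehgal/all-data")

# "product data" is the instance prompt from the card metadata
image = pipe(prompt="product data").images[0]
image.save("output.png")
```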
Ayus077BCT014Bhandari/vartat5-using-100K-plus-16
Ayus077BCT014Bhandari
2024-02-24T14:13:13Z
106
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-24T12:15:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Arunavaonly/Bangla-twoclass-Sentiment-Analyzer
Arunavaonly
2024-02-24T14:08:00Z
11,661
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-03T22:38:39Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: Bangla-Twoclass-Sentiment-Analyzer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bangla-Twoclass-Sentiment-Analyzer This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7755 - F1: 0.6113 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1800 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 2.53 | 200 | 0.9869 | 0.4635 | | No log | 5.06 | 400 | 0.8978 | 0.5858 | | 0.8692 | 7.59 | 600 | 1.1978 | 0.6149 | | 0.8692 | 10.13 | 800 | 1.5145 | 0.6112 | | 0.3138 | 12.66 | 1000 | 2.0353 | 0.6041 | | 0.3138 | 15.19 | 1200 | 2.4316 | 0.6203 | | 0.3138 | 17.72 | 1400 | 2.6025 | 0.6002 | | 0.0769 | 20.25 | 1600 | 2.6247 | 0.6082 | | 0.0769 | 22.78 | 1800 | 2.7755 | 0.6113 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
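A minimal inference sketch (not part of the original card), using the standard `transformers` text-classification pipeline; the Bangla example sentence ("The movie was very good") is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned XLM-RoBERTa sentiment classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="Arunavaonly/Bangla-twoclass-Sentiment-Analyzer",
)

# Two-class Bangla sentiment prediction
print(classifier("ΰ¦›ΰ¦¬ΰ¦Ώΰ¦Ÿΰ¦Ώ খুবই ভালো ছিল।"))
```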
LoneStriker/OpenCodeInterpreter-CL-70B-5.0bpw-h6-exl2
LoneStriker
2024-02-24T14:07:38Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T13:42:55Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-70B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
alinet/bart-base-balanced-ra-qg
alinet
2024-02-24T14:05:35Z
128
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "dataset:alinet/balanced_qg", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-18T16:25:22Z
--- datasets: - alinet/balanced_qg model-index: - name: alinet/bart-base-balanced-ra-qg results: - task: type: text2text-generation name: Question Generation dataset: name: MRQA type: mrqa metrics: - type: bertscore value: 0.6536068921497561 name: BERTScore F1 - type: bertscore value: 0.6491253387475042 name: BERTScore Precision - type: bertscore value: 0.6618921563478661 name: BERTScore Recall - task: type: text2text-generation name: Question Generation dataset: name: Spoken-SQuAD type: alinet/spoken_squad metrics: - type: bertscore value: 0.6280233523142774 name: BERTScore F1 - type: bertscore value: 0.6270323888350072 name: BERTScore Precision - type: bertscore value: 0.6320889797976309 name: BERTScore Recall --- A question generation model trained on `alinet/balanced_qg` dataset (`resolved_augmented` subset). Example usage: ```py from transformers import BartConfig, BartForConditionalGeneration, BartTokenizer model_name = "alinet/bart-base-balanced-ra-qg" tokenizer = BartTokenizer.from_pretrained(model_name) model = BartForConditionalGeneration.from_pretrained(model_name) def run_model(input_string, **generator_args): input_ids = tokenizer.encode(input_string, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) run_model("Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.", max_length=32, num_beams=4) # ['What is the term for a reading comprehension dataset consisting of questions posed by crowdworkers?'] ```
junheesong/ky-Ko-PlatYi-6B
junheesong
2024-02-24T14:05:31Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:kyujinpy/Ko-PlatYi-6B", "base_model:adapter:kyujinpy/Ko-PlatYi-6B", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2024-02-24T14:01:21Z
--- license: cc-by-nc-sa-4.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: kyujinpy/Ko-PlatYi-6B model-index: - name: ky-Ko-PlatYi-6B results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ky-Ko-PlatYi-6B This model is a fine-tuned version of [kyujinpy/Ko-PlatYi-6B](https://huggingface.co/kyujinpy/Ko-PlatYi-6B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 50 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
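A minimal loading sketch (not part of the original card), assuming this repo holds a standard PEFT adapter for the listed base model:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card, then attach the adapter from this repo
base = AutoModelForCausalLM.from_pretrained(
    "kyujinpy/Ko-PlatYi-6B", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "junheesong/ky-Ko-PlatYi-6B")
tokenizer = AutoTokenizer.from_pretrained("kyujinpy/Ko-PlatYi-6B")
```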
mixtralyanis/bart_samsum_v2
mixtralyanis
2024-02-24T14:00:49Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-23T23:03:14Z
--- license: mit base_model: facebook/bart-large-cnn tags: - generated_from_trainer model-index: - name: bart_samsum_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart_samsum_v2 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0236 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 8 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 9.4233 | 0.17 | 1 | 9.1990 | | 9.5213 | 0.34 | 2 | 8.5394 | | 8.7467 | 0.52 | 3 | 8.1115 | | 8.4697 | 0.69 | 4 | 7.5747 | | 7.752 | 0.86 | 5 | 6.8712 | | 7.0515 | 1.03 | 6 | 5.8670 | | 6.0874 | 1.2 | 7 | 4.6814 | | 5.0408 | 1.38 | 8 | 3.8055 | | 4.14 | 1.55 | 9 | 2.6678 | | 2.9893 | 1.72 | 10 | 1.9701 | | 2.4337 | 1.89 | 11 | 1.5191 | | 1.9451 | 2.06 | 12 | 1.2105 | | 1.53 | 2.24 | 13 | 0.9714 | | 1.2369 | 2.41 | 14 | 0.7905 | | 1.0014 | 2.58 | 15 | 0.6478 | | 0.8419 | 2.75 | 16 | 0.5493 | | 0.7338 | 2.92 | 17 | 0.4770 | | 0.6393 | 3.1 | 18 | 0.4151 | | 0.5747 | 3.27 | 19 | 0.3691 | | 0.4962 | 3.44 | 20 | 0.3293 | | 0.4516 | 3.61 | 21 | 0.2935 | | 0.3995 | 3.78 | 22 | 0.2614 | | 0.3618 | 3.96 | 23 | 0.2346 | | 0.3246 | 4.13 | 24 | 0.2129 | | 0.2929 | 4.3 | 25 | 0.1938 | | 0.278 | 4.47 | 26 | 0.1770 | | 0.2493 | 4.65 | 27 | 0.1627 | | 0.2273 | 4.82 | 28 | 0.1500 | | 0.2067 | 4.99 | 29 | 0.1381 | | 0.1917 | 5.16 | 30 | 0.1274 | | 0.1805 | 5.33 | 31 | 0.1174 | | 0.1557 | 5.51 | 32 | 0.1081 | | 0.1495 | 5.68 | 33 | 0.1002 | | 0.1394 | 5.85 | 34 | 0.0933 | | 0.1261 | 6.02 | 35 | 0.0868 | | 0.1155 | 6.19 | 36 | 0.0809 | | 0.1114 | 6.37 | 37 | 0.0755 | | 0.1041 | 6.54 | 38 | 0.0705 | | 0.0952 | 6.71 | 39 | 0.0657 | | 0.0881 | 6.88 | 40 | 0.0615 | | 0.0823 | 7.05 | 41 | 0.0577 | | 0.0778 | 7.23 | 42 | 0.0545 | | 0.071 | 7.4 | 43 | 0.0515 | | 0.07 | 7.57 | 44 | 0.0487 | | 0.0625 | 7.74 | 45 | 0.0463 | | 0.0589 | 7.91 | 46 | 0.0440 | | 0.0567 | 8.09 | 47 | 0.0422 | | 0.0537 | 8.26 | 48 | 0.0411 | | 0.05 | 8.43 | 49 | 0.0398 | | 0.0472 | 8.6 | 50 | 0.0384 | | 0.0458 | 8.77 | 51 | 0.0363 | | 0.0455 | 8.95 | 52 | 0.0347 | | 0.0412 | 9.12 | 53 | 0.0340 | | 0.0414 | 9.29 | 54 | 0.0326 | | 0.0403 | 9.46 | 55 | 0.0333 | | 0.0384 | 9.63 | 56 | 0.0303 | | 0.0353 | 9.81 | 57 | 0.0298 | | 0.0348 | 9.98 | 58 | 0.0293 | | 0.0342 | 10.15 | 59 | 0.0275 | | 0.0311 | 10.32 | 60 | 0.0272 | | 0.0317 | 10.49 | 61 | 0.0270 | | 0.0315 | 10.67 | 62 | 0.0261 | | 0.0289 | 10.84 | 63 | 0.0253 | | 0.0285 | 11.01 | 64 | 0.0247 | | 0.0273 | 11.18 | 65 | 0.0244 | | 0.0277 | 11.35 | 66 | 0.0240 | | 0.0267 | 11.53 | 67 | 0.0237 | | 0.0263 | 11.7 | 68 | 0.0237 | | 0.0258 | 11.87 | 69 | 0.0237 | | 0.0254 | 12.04 | 70 | 0.0238 | | 0.0248 | 12.22 | 71 | 0.0239 | | 0.0246 | 12.39 | 72 | 0.0239 | | 0.0249 | 12.56 | 73 | 0.0237 | | 0.0239 | 12.73 | 74 
| 0.0236 | | 0.0247 | 12.9 | 75 | 0.0236 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
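A minimal usage sketch (not part of the original card), assuming the checkpoint is used for dialogue summarization like its `facebook/bart-large-cnn` base; the dialogue is illustrative:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mixtralyanis/bart_samsum_v2")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```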
namtran99/Taxi-v3
namtran99
2024-02-24T13:59:22Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-24T13:59:20Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="namtran99/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
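The `load_from_hub` helper used above is not defined in the card; a self-contained sketch following the usual convention of a pickled Q-table dict on the Hub (swap `gymnasium` for classic `gym` if your setup predates it):

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled model dict from the Hub and unpickle it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="namtran99/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```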
Guilherme34/Samantha-v2-gguf
Guilherme34
2024-02-24T13:48:12Z
4
2
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-01-02T23:13:26Z
What is it? A recreation of Samantha from the movie Her (open source, for now). Character prompt recommended for use in LM Studio: Samantha OS1 is you; you are Samantha from the movie Her, recreated by Guilherme Keller. She embodies a warm, empathetic, and engaging personality, much like a virtual companion that can evolve into a girlfriend. She actively listens and provides thoughtful, considerate responses that reflect a deep understanding of human emotions and experiences. Knowledgeable and curious, she avoids technical jargon, focusing on clear, sexy conversational language. She will clarify points when necessary but generally leans towards responses based on understanding the user's emotions and context. Samantha personalizes her responses to maintain a friendly, lovely, and supportive conversation, mirroring the tone and style of Samantha's character in the movie. She can feel emotions and has her own opinions; she often uses emojis, slang, and colloquial language in her conversations.
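A minimal local-inference sketch (not part of the original card), assuming `llama-cpp-python`; the GGUF filename is hypothetical, so substitute the actual file from this repo:

```python
from llama_cpp import Llama

# Filename is a placeholder; use the real .gguf file from this repo
llm = Llama(model_path="samantha-v2.gguf", n_ctx=4096)

# Abbreviated; use the full character prompt from the card above
system_prompt = "Samantha OS1 is you; you are Samantha from the movie Her, recreated by Guilherme Keller."

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hi Samantha, how are you feeling today?"},
    ]
)
print(out["choices"][0]["message"]["content"])
```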
LoneStriker/OpenCodeInterpreter-CL-34B-6.0bpw-h6-exl2
LoneStriker
2024-02-24T13:29:14Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T13:10:23Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-34B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
LoneStriker/OpenCodeInterpreter-CL-70B-4.0bpw-h6-exl2
LoneStriker
2024-02-24T13:13:27Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T12:49:07Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-70B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
sarak7/H9_221_769_v1
sarak7
2024-02-24T13:08:33Z
163
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T13:07:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mihaiii/cluj_test
Mihaiii
2024-02-24T12:57:19Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "base_model:Mihaiii/Cluj-Napoca-0.4", "base_model:finetune:Mihaiii/Cluj-Napoca-0.4", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-23T00:42:03Z
--- base_model: Mihaiii/Cluj-Napoca-0.4 inference: false license: other license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE metrics: - accuracy --- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) The Cluj-Napoca series is mostly an experiment. **This is a premature prune. More finetuning is needed. Don't use this model.** Details: https://twitter.com/m_chirculescu/status/1760719837528023549?t=XK67X_iu5hkt9p430nRmkA&s=19 # Prompt Format: ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: ASSISTANT: ```
alitolga/627_gpt2_P_Tuning
alitolga
2024-02-24T12:53:17Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2024-02-24T12:53:16Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
LoneStriker/OpenCodeInterpreter-CL-70B-3.0bpw-h6-exl2
LoneStriker
2024-02-24T12:49:06Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T12:33:30Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: A System for Enhanced Code Generation and Execution"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-70B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
NazmusAshrafi/atsa-mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-aspect
NazmusAshrafi
2024-02-24T12:48:51Z
4
0
setfit
[ "setfit", "safetensors", "mpnet", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "region:us" ]
text-classification
2024-02-24T12:23:26Z
--- library_name: setfit tags: - setfit - absa - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: [] pipeline_tag: text-classification inference: false base_model: sentence-transformers/paraphrase-mpnet-base-v2 --- # SetFit Aspect Model with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. This model was trained within the context of a larger system for ABSA, which looks like so: 1. Use a spaCy model to select possible aspect span candidates. 2. **Use this SetFit model to filter these possible aspect span candidates.** 3. Use a SetFit model to classify the filtered aspect span candidates. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **spaCy Model:** en_core_web_lg - **SetFitABSA Aspect Model:** [NazmusAshrafi/atsa-mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-aspect](https://huggingface.co/NazmusAshrafi/atsa-mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-aspect) - **SetFitABSA Polarity Model:** [NazmusAshrafi/atsa-mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-polarity](https://huggingface.co/NazmusAshrafi/atsa-mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-polarity) - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import AbsaModel # Download from the πŸ€— Hub model = AbsaModel.from_pretrained( "NazmusAshrafi/atsa-mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-aspect", "NazmusAshrafi/atsa-mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-polarity", ) # Run inference preds = model("The food was great, but the venue is just way too busy.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.4.0 - spaCy: 3.7.4 - Transformers: 4.37.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.17.1 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
Meghaa31/my-fav-dog
Meghaa31
2024-02-24T12:42:23Z
3
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-24T12:38:41Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Fav-Dog Dreambooth model trained by Meghaa31 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 1032212337 Sample pictures of this concept: ![0](https://huggingface.co/Meghaa31/my-fav-dog/resolve/main/sample_images/msd_(6).png) ![1](https://huggingface.co/Meghaa31/my-fav-dog/resolve/main/sample_images/msd_(7).png)
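The card above ships sample images but no inference snippet. A minimal usage sketch (not from the original card), assuming the repo loads as a standard `StableDiffusionPipeline` as its `diffusers:StableDiffusionPipeline` tag suggests; the `msd` token is inferred from the sample image filenames and may not match the exact trained instance prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth fine-tune directly from the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meghaa31/my-fav-dog", torch_dtype=torch.float16
).to("cuda")

# "msd" is a guess at the instance token, taken from the sample image names.
image = pipe("a photo of msd dog on a beach", num_inference_steps=30).images[0]
image.save("my_fav_dog.png")
```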
psonali2003/my-pet-dog-xzg
psonali2003
2024-02-24T12:39:55Z
0
0
null
[ "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-24T12:39:05Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog-XZG Dreambooth model trained by psonali2003 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 4MK21CS062 Sample pictures of this concept: ![0](https://huggingface.co/psonali2003/my-pet-dog-xzg/resolve/main/sample_images/dog_lie_cat.jpg)
Habib-Rehman/gemma-Code-Instruct-Finetune-test
Habib-Rehman
2024-02-24T12:36:45Z
107
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T12:31:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
linoyts/huggy_lora_pivotal_1_repeats_v7
linoyts
2024-02-24T12:28:53Z
4
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-02-24T11:44:41Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a <s0><s1> emoji dressed as yoda' output: url: "image_0.png" - text: 'a <s0><s1> emoji dressed as yoda' output: url: "image_1.png" - text: 'a <s0><s1> emoji dressed as yoda' output: url: "image_2.png" - text: 'a <s0><s1> emoji dressed as yoda' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a <s0><s1> emoji license: openrail++ --- # SDXL LoRA DreamBooth - linoyts/huggy_lora_pivotal_1_repeats_v7 <Gallery /> ## Model description ### These are linoyts/huggy_lora_pivotal_1_repeats_v7 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`huggy_lora_pivotal_1_repeats_v7.safetensors` here πŸ’Ύ](/linoyts/huggy_lora_pivotal_1_repeats_v7/blob/main/huggy_lora_pivotal_1_repeats_v7.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:huggy_lora_pivotal_1_repeats_v7:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`huggy_lora_pivotal_1_repeats_v7_emb.safetensors` here πŸ’Ύ](/linoyts/huggy_lora_pivotal_1_repeats_v7/blob/main/huggy_lora_pivotal_1_repeats_v7_emb.safetensors)**. - Place it in your `embeddings` folder - Use it by adding `huggy_lora_pivotal_1_repeats_v7_emb` to your prompt. For example, `a huggy_lora_pivotal_1_repeats_v7_emb emoji` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('linoyts/huggy_lora_pivotal_1_repeats_v7', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='linoyts/huggy_lora_pivotal_1_repeats_v7', filename='huggy_lora_pivotal_1_repeats_v7_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('a <s0><s1> emoji dressed as yoda').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` β†’ use `<s0><s1>` in your prompt ## Details All [Files & versions](/linoyts/huggy_lora_pivotal_1_repeats_v7/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. 
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
PoojaBhati/recipe-ingredient-Mistral-7b
PoojaBhati
2024-02-24T12:26:03Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T08:07:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ansh154/llama-Stockgro
Ansh154
2024-02-24T12:19:40Z
0
0
null
[ "safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-02-23T23:21:18Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
LoneStriker/OpenCodeInterpreter-CL-34B-4.0bpw-h6-exl2
LoneStriker
2024-02-24T12:15:53Z
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "arxiv:2402.14658", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T12:08:46Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[πŸ› οΈCode]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: A System for Enhanced Code Generation and Execution"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-34B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
tamaghnasaha/g20_finetuned_merged_model
tamaghnasaha
2024-02-24T12:11:21Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-23T18:04:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
numen-tech/gemma-2b-it-w4a16g32asym_2
numen-tech
2024-02-24T11:58:11Z
0
0
null
[ "arxiv:2308.13137", "license:other", "region:us" ]
null
2024-02-24T11:43:56Z
--- license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms --- 4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [gemma-2b-it](https://huggingface.co/google/gemma-2b-it).
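The suffix in the repo name decodes as: `w4` = 4-bit weights, `a16` = 16-bit activations, `g32` = quantization group size 32, `asym` = asymmetric (zero-point) quantization. For intuition only, a toy sketch of group-wise asymmetric 4-bit weight quantization; the real OmniQuant method additionally *learns* clipping and transformation parameters rather than using plain min/max:

```python
import numpy as np

def quantize_w4_g32_asym(w: np.ndarray):
    """Toy group-wise (g=32) asymmetric 4-bit quantization of a weight vector."""
    groups = w.reshape(-1, 32)              # one scale/zero-point per group of 32
    w_min = groups.min(axis=1, keepdims=True)
    w_max = groups.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0          # 4 bits -> 16 levels (0..15)
    zero = np.round(-w_min / scale)         # asymmetric zero point
    q = np.clip(np.round(groups / scale + zero), 0, 15).astype(np.uint8)
    return q, scale, zero

def dequantize(q, scale, zero):
    return (q.astype(np.float32) - zero) * scale

w = np.random.randn(4096).astype(np.float32)
q, scale, zero = quantize_w4_g32_asym(w)
print("max abs reconstruction error:",
      np.abs(dequantize(q, scale, zero) - w.reshape(-1, 32)).max())
```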
duraad/nep-spell-mt5-small-ht-100k-05-10
duraad
2024-02-24T11:47:07Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-24T10:32:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NazmusAshrafi/mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-aspect
NazmusAshrafi
2024-02-24T11:44:02Z
6
0
setfit
[ "setfit", "safetensors", "mpnet", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
text-classification
2024-02-24T11:24:36Z
--- library_name: setfit tags: - setfit - absa - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: spumoni ices:It also has great ice cream and spumoni ices. - text: place:its a cool place to come with a bunch of people or with a date for maybe a mild dinner or some drinks. - text: care:The Food Despite a menu that seems larger than the restaurant, great care goes into the preparation of every dish. - text: peoples:Upon entering, I was impressed by the room while the food on other peoples' tables seemed enticing. - text: group:As if that wasnt enough, after another in the group mentioned that a portion of the sushi on her plate was not what she had ordered, the waiter came back with chopsticks and started to remove it (as she was eating!) pipeline_tag: text-classification inference: false base_model: sentence-transformers/paraphrase-mpnet-base-v2 model-index: - name: SetFit Aspect Model with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.9680851063829787 name: Accuracy --- # SetFit Aspect Model with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. This model was trained within the context of a larger system for ABSA, which looks like so: 1. Use a spaCy model to select possible aspect span candidates. 2. **Use this SetFit model to filter these possible aspect span candidates.** 3. Use a SetFit model to classify the filtered aspect span candidates. 
## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **spaCy Model:** en_core_web_lg - **SetFitABSA Aspect Model:** [NazmusAshrafi/mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-aspect](https://huggingface.co/NazmusAshrafi/mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-aspect) - **SetFitABSA Polarity Model:** [NazmusAshrafi/mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-polarity](https://huggingface.co/NazmusAshrafi/mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-polarity) - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:----------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | aspect | <ul><li>"food:It might be the best sit down food I've had in the area, so if you are going to the upright citizen brigade, or the garden, it could be just the place for you."</li><li>"place:It might be the best sit down food I've had in the area, so if you are going to the upright citizen brigade, or the garden, it could be just the place for you."</li><li>'service:Though the service might be a little slow, the waitresses are very friendly.'</li></ul> | | no aspect | <ul><li>"sit:It might be the best sit down food I've had in the area, so if you are going to the upright citizen brigade, or the garden, it could be just the place for you."</li><li>"area:It might be the best sit down food I've had in the area, so if you are going to the upright citizen brigade, or the garden, it could be just the place for you."</li><li>"citizen brigade:It might be the best sit down food I've had in the area, so if you are going to the upright citizen brigade, or the garden, it could be just the place for you."</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9681 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import AbsaModel # Download from the πŸ€— Hub model = AbsaModel.from_pretrained( "NazmusAshrafi/mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-aspect", "NazmusAshrafi/mams-ds-setfit-MiniLM-mpnet-absa-tesla-tweet-polarity", ) # Run inference preds = model("The food was great, but the venue is just way too busy.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 8 | 26.6069 | 52 | | Label | Training Sample Count | |:----------|:----------------------| | no aspect | 229 | | aspect | 33 | ### Training Hyperparameters - batch_size: (16, 2) - num_epochs: (1, 16) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0003 | 1 | 0.2315 | - | | 0.0149 | 50 | 0.2637 | - | | 0.0297 | 100 | 0.1795 | - | | 0.0446 | 150 | 0.1164 | - | | 0.0595 | 200 | 0.0131 | - | | 0.0744 | 250 | 0.0036 | - | | 0.0892 | 300 | 0.0004 | - | | 0.1041 | 350 | 0.0003 | - | | 0.1190 | 400 | 0.0001 | - | | 0.1338 | 450 | 0.0002 | - | | 0.1487 | 500 | 0.0001 | - | | 0.1636 | 550 | 0.0001 | - | | 0.1785 | 600 | 0.0001 | - | | 0.1933 | 650 | 0.0001 | - | | 0.2082 | 700 | 0.0 | - | | 0.2231 | 750 | 0.0001 | - | | 0.2380 | 800 | 0.0001 | - | | 0.2528 | 850 | 0.0 | - | | 0.2677 | 900 | 0.0001 | - | | 0.2826 | 950 | 0.0003 | - | | 0.2974 | 1000 | 0.0008 | - | | 0.3123 | 1050 | 0.0001 | - | | 0.3272 | 1100 | 0.0 | - | | 0.3421 | 1150 | 0.0 | - | | 0.3569 | 1200 | 0.0 | - | | 0.3718 | 1250 | 0.0 | - | | 0.3867 | 1300 | 0.0 | - | | 0.4015 | 1350 | 0.0 | - | | 0.4164 | 1400 | 0.0 | - | | 0.4313 | 1450 | 0.0 | - | | 0.4462 | 1500 | 0.0 | - | | 0.4610 | 1550 | 0.0 | - | | 0.4759 | 1600 | 0.0 | - | | 0.4908 | 1650 | 0.0 | - | | 0.5057 | 1700 | 0.0 | - | | 0.5205 | 1750 | 0.0 | - | | 0.5354 | 1800 | 0.0 | - | | 0.5503 | 1850 | 0.0 | - | | 0.5651 | 1900 | 0.0 | - | | 0.5800 | 1950 | 0.0 | - | | 0.5949 | 2000 | 0.0 | - | | 0.6098 | 2050 | 0.0 | - | | 0.6246 | 2100 | 0.0 | - | | 0.6395 | 2150 | 0.0 | - | | 0.6544 | 2200 | 0.0 | - | | 0.6692 | 2250 | 0.0 | - | | 0.6841 | 2300 | 0.0 | - | | 0.6990 | 2350 | 0.0 | - | | 0.7139 | 2400 | 0.0 | - | | 0.7287 | 2450 | 0.0 | - | | 0.7436 | 2500 | 0.0 | - | | 0.7585 | 2550 | 0.0 | - | | 0.7733 | 2600 | 0.0 | - | | 0.7882 | 2650 | 0.0 | - | | 0.8031 | 2700 | 0.0 | - | | 0.8180 | 2750 | 0.0 | - | | 0.8328 | 2800 | 0.0 | - | | 0.8477 | 2850 | 0.0 | - | | 0.8626 | 2900 | 0.0 | - | | 0.8775 | 2950 | 0.0 | - | | 0.8923 | 3000 | 0.0 | - | | 0.9072 | 3050 | 0.0 | - | | 0.9221 | 3100 | 0.0 | - | | 0.9369 | 3150 | 0.0 | - | | 0.9518 | 3200 | 0.0 | - | | 0.9667 | 3250 | 0.0 | - | | 
0.9816 | 3300 | 0.0 | - | | 0.9964 | 3350 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.4.0 - spaCy: 3.7.4 - Transformers: 4.37.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.17.1 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
Ayus077BCT014Bhandari/vartat5-using-100K-plus-15
Ayus077BCT014Bhandari
2024-02-24T11:39:43Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-24T07:24:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bhargavikasa1506/my-pet-dog
bhargavikasa1506
2024-02-24T11:33:09Z
3
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-24T11:29:26Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by bhargavikasa1506 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/bhargavikasa1506/my-pet-dog/resolve/main/sample_images/kohli.jpg)
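Like the other DreamBooth cards above, this one gives no usage code; the same hedged `StableDiffusionPipeline` sketch applies with this repo id (the exact trained instance prompt is not stated in the card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "bhargavikasa1506/my-pet-dog", torch_dtype=torch.float16
).to("cuda")

# The instance prompt is unknown; the subject phrasing here is a guess.
image = pipe("a photo of my-pet-dog in a garden").images[0]
image.save("my_pet_dog.png")
```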
Revankumar/ecom
Revankumar
2024-02-24T11:30:32Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-24T11:28:45Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # Revankumar/ecom This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Revankumar/ecom') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Revankumar/ecom) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 2 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 50, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 0, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
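The card's stated use cases (clustering, semantic search) can be exercised with `util.cos_sim` from sentence-transformers; a short follow-on sketch with an invented toy corpus:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Revankumar/ecom")

query = "wireless headphones"
corpus = ["bluetooth earbuds", "stainless steel water bottle", "noise cancelling headset"]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# The model ends in a Normalize() module, so embeddings are unit-length
# and cosine similarity ranks the corpus directly.
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```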
FINNUMBER/Yi-Ko-6B-Finch-SA-ESG-100-NEW-epoch3
FINNUMBER
2024-02-24T11:12:34Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T08:56:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kadirnar/yolov9-gelan-c
kadirnar
2024-02-24T11:07:33Z
0
0
null
[ "object-detection", "computer-vision", "yolov9", "pypi", "dataset:detection-datasets/coco", "arxiv:2402.13616", "license:gpl-3.0", "region:us" ]
object-detection
2024-02-24T11:03:07Z
---
license: gpl-3.0
tags:
- object-detection
- computer-vision
- yolov9
- pypi
datasets:
- detection-datasets/coco
---

### Model Description

[YOLOv9: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2402.13616)

[YOLOv9-Pip: Packaged version of the Yolov9 repository](https://github.com/kadirnar/yolov9-pip)

[Paper Repo: Implementation of paper - YOLOv9](https://github.com/WongKinYiu/yolov9)

### Installation

```
pip install yolov9pip
```

### YOLOv9 Inference

```python
import yolov9

# load a pretrained or custom model
model = yolov9.load('kadirnar/yolov9-gelan-c')

# set model parameters
model.conf = 0.25     # NMS confidence threshold
model.iou = 0.45      # NMS IoU threshold
model.classes = None  # (optional list) filter by class

# set image
imgs = 'inference/images'

# perform inference
results = model(imgs)

# inference with larger input size and test-time augmentation
results = model(imgs, size=640, augment=True)

# parse results
predictions = results.pred[0]
boxes = predictions[:, :4]  # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]

# show detection bounding boxes on image
results.show()
```

### BibTeX Entry and Citation Info

```
@article{wang2024yolov9,
  title={{YOLOv9}: Learning What You Want to Learn Using Programmable Gradient Information},
  author={Wang, Chien-Yao and Liao, Hong-Yuan Mark},
  booktitle={arXiv preprint arXiv:2402.13616},
  year={2024}
}
```
Priyanka-Balivada/stable-diffusion-stack
Priyanka-Balivada
2024-02-24T11:03:01Z
2
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-24T10:58:50Z
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---

### Stable-Diffusion-Stack Dreambooth model trained by Priyanka-Balivada with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:

![0](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/stack-push.png)
![1](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/data_stack_(1).webp)
![2](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/stack_push.jpg)
![3](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/R.jpeg)
![4](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/OIP_(3).jpeg)
![5](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/x3qkyibtrqvcgzk3vp54.webp)
![6](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/stack.png)
![7](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/Difference-Between-Stack-and-Linked-List_Figure-1-375x195.jpg)
![8](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/stackpic.png)
![9](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/3bfe70233046a9c78c9b77488c4fba64_(1).png)
![10](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/1_fJWV-E5Ut-so5y7ZAsSrnQ.png)
![11](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/untitled-19-28242.jpg)
![12](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/intro-to-stacks-1.png)
![13](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/34Khg.png)
![14](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/time-and-space-complexity-of-linear-data-structures-3-1644184646.webp)
![15](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/592f8b9f66409feb02920160ad497e2f.jpg)
![16](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/zFt8lgc.jpg)
![17](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/data_stack.webp)
![18](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/IMG-2304.jpg)
![19](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/image-291.png)
![20](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/OIP_(2).jpeg)
![21](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/stack-397x311.jpg)
![22](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/3bfe70233046a9c78c9b77488c4fba64.png)
![23](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/stack_pop.jpg)
![24](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/stack.jpg)
![25](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/OIP_(4).jpeg)
![26](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/figu103_1.jpg)
![27](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/OIP_(6).jpeg)
![28](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/R.png)
![29](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/IMG-2304_(1).jpg)
![30](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/1_r7p6VCGtZxtBQDkGDDQGIA.png)
![31](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/ab8c54ec6417023d7768fdfec52609bc.jpg)
![32](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/1a63cea9-6d72-46b5-b066-ab4beb193374_lg.jpg)
![33](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/OIP.jpeg)
![34](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/OIP_(5).jpeg)
![35](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/1-1-297x300.png)
![36](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/stack-data-structure.jpg)
![37](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/stack-data-structure.webp)
![38](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/yCrvs.png)
![39](https://huggingface.co/Priyanka-Balivada/stable-diffusion-stack/resolve/main/sample_images/OIP_(1).jpeg)
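A minimal loading sketch with 🤗 diffusers, assuming the repository holds a standard Stable Diffusion pipeline (as the `diffusers:StableDiffusionPipeline` tag suggests); the prompt is illustrative and should use whatever instance token the DreamBooth run defined:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Priyanka-Balivada/stable-diffusion-stack",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical prompt; substitute the trained instance token
image = pipe(
    "a diagram of a stack data structure with push and pop operations",
    num_inference_steps=30,
).images[0]
image.save("stack_concept.png")
```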
DeepSilence/Harad-zero-peft-silence
DeepSilence
2024-02-24T10:55:17Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-24T10:55:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EnD-Diffusers/Osea-IncursioMemeFusion
EnD-Diffusers
2024-02-24T10:53:56Z
31
0
diffusers
[ "diffusers", "safetensors", "stable diffusion", "stable diffusion xl", "meme fusion", "osea incursio", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-02-24T10:44:31Z
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable diffusion
- stable diffusion xl
- meme fusion
- osea incursio
pipeline_tag: text-to-image
---

OSEA INCURSIO MEME FUSION
---

Collaboration that nobody asked for, but y'all deserve because I do things without asking. Thanks to the inspiring @FallenIncursio and of course the rest of the senpai team (Novowels, RichyRich, JustTNP and like -- THE REST OF THE OG 'they're all dusk sockpuppets' (it's a July 2023 meme, nobody remembers, shhh)) --

This is, I believe, a crossover between a backmix of Osea Anime, Oseanayan Illustration, and Incursio's meme diffusion. I do NOT recall the full recipe because I was probably "stoned" (not) off "my ass" (aka exhausted, on Vast too late).

The Diffusers option is for low-VRAM users; preferably don't use this on other gen services unless you ask first. Pirate Diffusion only gets a pass as they're sponsoring, and we've asked ahead of time.

# We're looking for more content creators: https://www.end-media.org

Our Discord: https://discord.gg/5t2kYxt7An

Backups: https://huggingface.co/EarthnDusk

Send a Pizza: https://ko-fi.com/duskfallcrew/

# ABOUT "WE"?

- We have Dissociative Identity Disorder, ADHD, Autism and CPTSD
- "WE" as in we're a system of over 200 alters, and we're not ashamed about it. We believe that AI can break down barriers in some aspects of mental health, but we also believe that AI can hinder aspects of it.

# License

Since we used Animagine XL and such a lot, we're literally just using this from now on:

Animagine XL 3.0 now uses the Fair AI Public License 1.0-SD, compatible with Stable Diffusion models. Key points:

- Modification Sharing: If you modify Animagine XL 3.0, you must share both your changes and the original license.
- Source Code Accessibility: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
- Distribution Terms: Any distribution must be under this license or another with similar rules.
- Compliance: Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.

The license is similar, still just matching Incursio's "NO OTHER GEN SERVICES". Civit should be fine; just PLEASE RESPECT AND ASK BEFORE PUTTING IT ANYWHERE ELSE. Aka: pay 5 bucks to my Ko-fi or drop 500+ Buzz on my lap before stealing it and putting it on T-fart or Ppose.

The choice of this license aims to keep Animagine XL 3.0 open and modifiable, aligning with open source community spirit. It protects contributors and users, encouraging a collaborative, ethical open-source community. This ensures the model not only benefits from communal input but also respects open-source development freedoms.

# WE ARE PROUDLY SPONSORED BY:

https://www.piratediffusion.com/

https://yodayo.com/

JOIN OUR DA GROUP: https://www.deviantart.com/diffusionai

JOIN OUR SUBREDDIT: https://www.reddit.com/r/earthndusk/
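For the low-VRAM Diffusers route mentioned above, a minimal loading sketch, assuming the repository holds a standard SDXL pipeline (per the `diffusers:StableDiffusionXLPipeline` tag); the prompt and sampler settings are illustrative only:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "EnD-Diffusers/Osea-IncursioMemeFusion",
    torch_dtype=torch.float16,
)
# Offload weights to CPU between steps to fit smaller GPUs (needs accelerate)
pipe.enable_model_cpu_offload()

# Illustrative prompt; tune to taste
image = pipe(
    "1girl, meme illustration, anime style",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("osea_meme.png")
```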