Dataset schema (one value per column for each model record below):

| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-25 00:52:29 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 575 distinct values |
| tags | list | length 1–4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-25 00:46:22 |
| card | string | length 11–1.01M |
mlabonne/Zebrafish-7B
mlabonne
2024-04-01T12:57:38Z
266
14
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "arxiv:2403.19522", "base_model:liminerity/M7-7b", "base_model:finetune:liminerity/M7-7b", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-31T16:16:06Z
--- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit base_model: - liminerity/M7-7b - rwitz/experiment26-truthy-iter-0 --- # Zebrafish-7B Zebrafish-7B is my first model using the new merge method called [Model Stock](https://arxiv.org/abs/2403.19522). Zebrafish-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b) * [rwitz/experiment26-truthy-iter-0](https://huggingface.co/rwitz/experiment26-truthy-iter-0) Special thanks to Charles Goddard for the quick implementation! ## πŸ† Evaluation ### Nous | Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) [πŸ“„](https://gist.github.com/mlabonne/1d33c86824b3a11d2308e36db1ba41c1) | 62.74 | 45.37 | 77.01 | 78.39 | 50.2 | | [**mlabonne/Zebrafish-7B**](https://huggingface.co/mlabonne/Zebrafish-7B) [πŸ“„](https://gist.github.com/mlabonne/719d5e106eefbcffb951b65616dcbec4) | **62.41** | **44.92** | **77.18** | **78.25** | **49.28** | | [mlabonne/Beyonder-4x7B-v3](https://huggingface.co/mlabonne/Beyonder-4x7B-v3) [πŸ“„](https://gist.github.com/mlabonne/3740020807e559f7057c32e85ce42d92) | 61.91 | 45.85 | 76.67 | 74.98 | 50.12 | | [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [πŸ“„](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 | | [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) [πŸ“„](https://gist.github.com/mlabonne/05d358e17dffdf9eee7c2322380c9da6) | 54.81 | 38.5 | 71.64 | 66.82 | 42.29 | ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 - model: liminerity/M7-7b - model: rwitz/experiment26-truthy-iter-0 merge_method: model_stock base_model: mistralai/Mistral-7B-v0.1 dtype: bfloat16 ``` ## πŸ’» Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/Zebrafish-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
TheVisitorX/InfinityRP-v1-7B-4.25bpw-exl2
TheVisitorX
2024-04-01T12:56:17Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "exl2", "quantized", "roleplay", "4.25bpw", "merge", "nsfw", "en", "base_model:ChaoticNeutrals/Eris_Floramix_DPO_7B", "base_model:merge:ChaoticNeutrals/Eris_Floramix_DPO_7B", "base_model:ResplendentAI/Datura_7B", "base_model:merge:ResplendentAI/Datura_7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-01T12:50:06Z
--- library_name: transformers license: apache-2.0 language: - en tags: - exl2 - quantized - roleplay - 4.25bpw - mistral - merge - nsfw inference: false base_model: - ResplendentAI/Datura_7B - ChaoticNeutrals/Eris_Floramix_DPO_7B --- This repository hosts the exl2 quantization of [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B). **Original model information:** ![waifu/jpeg](https://i.imgur.com/cslLqjd.jpeg) This is an experimental model I currently use. It's far from great, as I'm still working on it, but I leave it here for people to try if they are interested in this format. This model was basically made to stop some upsetting hallucinations, so {{char}} will mostly wait for {{user}}'s response instead of responding itself or deciding for {{user}}. Also, my primary idea was to create a cozy model that thinks. Inspired by [lemonilia/Limamono-Mistral-7B-v0.50](https://huggingface.co/lemonilia/Limamono-Mistral-7B-v0.50) ### Style details: - Quotes are used for character dialogs. - `"Hey, Anon... What do you think about my style?"` - Asterisks can be used for narration, but they are optional; the default novel format is recommended. - `*Her cheeks blush slightly, she tries to hide.*` - Character thoughts are wrapped with ` marks. **These may often occur spontaneously.** - `My heart skips a beat hearing him call me pretty!` *If you want thoughts to appear more often, just add something like this to your system prompt: ```"{{char}} internal thoughts are wrapped with ` marks."```* - Accepted response lengths: ***tiny, short, medium, long, huge*** - For example: ### Response: (length = medium) Note: apparently ***humongous***, ***extreme*** and ***unlimited*** may not work at the moment; not fully tested. ### Prompt format: Extended Alpaca, as always. ``"You are now in roleplay chat mode. Engage in an endless chat with {{user}}. Always wait {{user}} turn, next actions and responses."`` ## Example: ![example](https://files.catbox.moe/j0zxov.png)
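The card stops short of showing the extended Alpaca layout assembled end to end; the following is a minimal, hypothetical sketch of it (the helper name and example turn are invented; only the section markers and the length hint follow the card):

```python
# Hypothetical helper; only the "### Response: (length = ...)" convention comes from the card.
def build_infinityrp_prompt(system: str, user_turn: str, length: str = "medium") -> str:
    return (
        f"{system}\n\n"
        f"### Instruction:\n{user_turn}\n\n"
        f"### Response: (length = {length})\n"
    )

system = ('"You are now in roleplay chat mode. Engage in an endless chat with {{user}}. '
          'Always wait {{user}} turn, next actions and responses."')
print(build_infinityrp_prompt(system, "*waves* Hey, how have you been?"))
```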
Chaitanya798800/videomae-large_7class_UCFCrime
Chaitanya798800
2024-04-01T12:51:52Z
6
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-large", "base_model:finetune:MCG-NJU/videomae-large", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-04-01T09:26:01Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-large tags: - generated_from_trainer model-index: - name: videomae-large_7class_UCFCrime results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-large_7class_UCFCrime This model is a fine-tuned version of [MCG-NJU/videomae-large](https://huggingface.co/MCG-NJU/videomae-large) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 2.1295 - eval_confusion_matrix: {'confusion_matrix': array([[ 7, 0, 1, 1, 39, 4, 0], [ 47, 42, 1, 0, 110, 43, 0], [ 1, 0, 37, 2, 0, 0, 9], [ 1, 0, 16, 52, 1, 0, 0], [ 4, 0, 1, 1, 107, 22, 7], [ 0, 0, 0, 0, 16, 54, 0], [ 0, 0, 4, 2, 14, 33, 59]])} - eval_runtime: 327.6535 - eval_samples_per_second: 2.252 - eval_steps_per_second: 1.126 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2875 ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
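To make the reported confusion matrix easier to read, here is a small sketch reducing it to overall and per-class accuracy, assuming rows are true labels and columns are predictions:

```python
import numpy as np

# Confusion matrix copied from the evaluation results above
# (assuming rows = true labels, columns = predicted labels).
cm = np.array([[ 7,  0,  1,  1,  39,  4,  0],
               [47, 42,  1,  0, 110, 43,  0],
               [ 1,  0, 37,  2,   0,  0,  9],
               [ 1,  0, 16, 52,   1,  0,  0],
               [ 4,  0,  1,  1, 107, 22,  7],
               [ 0,  0,  0,  0,  16, 54,  0],
               [ 0,  0,  4,  2,  14, 33, 59]])

overall_acc = np.trace(cm) / cm.sum()         # fraction of all clips classified correctly
per_class_recall = np.diag(cm) / cm.sum(axis=1)  # recall for each of the 7 classes
print(f"overall accuracy: {overall_acc:.3f}")
print("per-class recall:", np.round(per_class_recall, 3))
```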
pippinnie/distilroberta-base-finetuned-cyber-readme
pippinnie
2024-04-01T12:49:05Z
61
0
transformers
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-04-01T12:14:45Z
--- license: apache-2.0 base_model: distilroberta-base tags: - generated_from_keras_callback model-index: - name: pippinnie/distilroberta-base-finetuned-cyber-readme results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # pippinnie/distilroberta-base-finetuned-cyber-readme This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6746 - Validation Loss: 1.5880 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.6746 | 1.5880 | 0 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.16.1 - Datasets 2.18.0 - Tokenizers 0.15.2
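The card stops at training metrics; a minimal fill-mask sketch is below (the example sentence is invented, and `framework="tf"` is an assumption based on the repository shipping TensorFlow weights):

```python
from transformers import pipeline

# framework="tf" is an assumption here, since the repo appears to ship TF weights.
fill = pipeline(
    "fill-mask",
    model="pippinnie/distilroberta-base-finetuned-cyber-readme",
    framework="tf",
)
for pred in fill("Attackers exploited a <mask> in the authentication service."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```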
au2a/Llama-2-7b-hf-20240401-2
au2a
2024-04-01T12:48:56Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T12:36:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wxzhang/dpo-selective-mixdata
wxzhang
2024-04-01T12:48:31Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-31T23:22:23Z
--- tags: - trl - dpo - generated_from_trainer model-index: - name: dpo-selective-mixdata results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dpo-selective-mixdata This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5805 - Rewards/chosen: -4.7292 - Rewards/rejected: -5.2763 - Rewards/accuracies: 0.6934 - Rewards/margins: 0.5471 - Logps/rejected: -654.5243 - Logps/chosen: -590.7578 - Logits/rejected: 6.2956 - Logits/chosen: 6.4467 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.5481 | 0.27 | 500 | 0.6089 | -2.6822 | -3.1236 | 0.6705 | 0.4414 | -439.2521 | -386.0565 | 3.9671 | 4.1604 | | 0.5519 | 0.53 | 1000 | 0.5867 | -4.2523 | -4.7597 | 0.6894 | 0.5074 | -602.8671 | -543.0739 | 5.1974 | 5.3486 | | 0.5597 | 0.8 | 1500 | 0.5821 | -4.7906 | -5.3218 | 0.6959 | 0.5311 | -659.0733 | -596.9037 | 6.4644 | 6.6294 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.0
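For readers decoding the Rewards/* columns above: in DPO the implicit reward of a completion is the beta-scaled policy-vs-reference log-probability gap, and accuracy is how often the chosen completion out-scores the rejected one. A minimal sketch of that arithmetic (beta = 0.1 is an assumption; the card does not state it):

```python
import torch
import torch.nn.functional as F

def dpo_stats(policy_chosen_logps, policy_rejected_logps,
              ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards: beta-scaled log-prob gap between policy and reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = chosen_rewards - rejected_rewards        # Rewards/margins analogue
    loss = -F.logsigmoid(margins).mean()               # standard DPO objective
    accuracy = (margins > 0).float().mean()            # Rewards/accuracies analogue
    return loss, chosen_rewards.mean(), rejected_rewards.mean(), accuracy
```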
lockylocks/rl_course_vizdoom_health_gathering_supreme
lockylocks
2024-04-01T12:44:38Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-04-01T12:44:31Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 9.04 +/- 2.93 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r lockylocks/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
samaysk/results
samaysk
2024-04-01T12:44:02Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:samaysk/megaspringllamaft70", "base_model:adapter:samaysk/megaspringllamaft70", "region:us" ]
null
2024-04-01T12:43:57Z
--- library_name: peft tags: - trl - sft - generated_from_trainer base_model: samaysk/megaspringllamaft70 model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [samaysk/megaspringllamaft70](https://huggingface.co/samaysk/megaspringllamaft70) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 70 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.1 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
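The card never shows how to load the adapter; here is a minimal PEFT sketch, under the assumption that this repo holds only LoRA adapter weights for the listed base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes this repo contains only adapter weights for the listed base model.
base = AutoModelForCausalLM.from_pretrained("samaysk/megaspringllamaft70")
model = PeftModel.from_pretrained(base, "samaysk/results")
tokenizer = AutoTokenizer.from_pretrained("samaysk/megaspringllamaft70")

inputs = tokenizer("Hello,", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```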
ostris/fabricated-reality-sdxl
ostris
2024-04-01T12:40:17Z
201
14
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-04-01T02:48:27Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image --- # Fabricated Reality - SDXL This is a realistic SDXL model that I have been finetuning, merging, and tweaking since SDXL came out. I have done a ton of targeted guidance training to fix eyes, teeth, and hands, as well as many other aspects. I have been planning to release it for a long time, but I hate doing all of the write-ups and demo images, so I decided to just put it up here now and follow up with more info later. To be continued.
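Until the promised write-up lands, a minimal diffusers sketch for trying the model; fp16 on CUDA and the sample prompt are assumptions:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# fp16 + CUDA are assumptions; adjust for your hardware.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "ostris/fabricated-reality-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("photo of a woman reading in a sunlit cafe",
             num_inference_steps=30).images[0]
image.save("sample.png")
```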
alynakbaba/Enlighten_Instruct
alynakbaba
2024-04-01T12:31:49Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-01T12:31:30Z
--- library_name: peft base_model: mistralai/Mistral-7B-Instruct-v0.2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
arkocharyan/aram-trained-xl-v1
arkocharyan
2024-04-01T12:28:16Z
1
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-04-01T12:08:47Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks man widget: - text: A photo of sks man in the office output: url: image_0.png - text: A photo of sks man in the office output: url: image_1.png - text: A photo of sks man in the office output: url: image_2.png - text: A photo of sks man in the office output: url: image_3.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - arkocharyan/aram-trained-xl-v1 <Gallery /> ## Model description These are arkocharyan/aram-trained-xl-v1 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of sks man to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](arkocharyan/aram-trained-xl-v1/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
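Since the "How to use" snippet above is still a TODO, here is a minimal sketch under stated assumptions (fp16 on CUDA, and that the repo follows the trainer's default LoRA weight layout):

```python
import torch
from diffusers import DiffusionPipeline

# A sketch for the TODO above; fp16/CUDA and the default LoRA weight layout are assumptions.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("arkocharyan/aram-trained-xl-v1")

image = pipe("A photo of sks man in the office", num_inference_steps=30).images[0]
image.save("sks_man.png")
```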
ml4algotrading/finetuning-sentiment-model-3000-samples
ml4algotrading
2024-04-01T12:25:02Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-04-01T12:24:40Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3375 - Accuracy: 0.8767 - F1: 0.8810 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
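The card omits an inference example; a minimal sketch follows (the sample sentence is invented, and the label names returned depend on this repo's config):

```python
from transformers import pipeline

# Label names (e.g. LABEL_0/LABEL_1) depend on the repo's config; this is a sketch.
clf = pipeline("text-classification",
               model="ml4algotrading/finetuning-sentiment-model-3000-samples")
print(clf("Earnings beat expectations and guidance was raised."))
```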
MaziyarPanahi/Experiment28Strangemerges_32-7B-GGUF
MaziyarPanahi
2024-04-01T12:21:23Z
54
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:yam-peleg/Experiment28-7B", "base_model:Gille/StrangeMerges_32-7B-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/Experiment28Strangemerges_32-7B", "base_model:quantized:automerger/Experiment28Strangemerges_32-7B" ]
text-generation
2024-04-01T11:59:11Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:yam-peleg/Experiment28-7B - base_model:Gille/StrangeMerges_32-7B-slerp - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: Experiment28Strangemerges_32-7B-GGUF base_model: automerger/Experiment28Strangemerges_32-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Experiment28Strangemerges_32-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment28Strangemerges_32-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/Experiment28Strangemerges_32-7B](https://huggingface.co/automerger/Experiment28Strangemerges_32-7B) ## Description [MaziyarPanahi/Experiment28Strangemerges_32-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment28Strangemerges_32-7B-GGUF) contains GGUF format model files for [automerger/Experiment28Strangemerges_32-7B](https://huggingface.co/automerger/Experiment28Strangemerges_32-7B). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights.
Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw). * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/Experiment28Strangemerges_32-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment28Strangemerges_32-7B-GGUF) and below it, a specific filename to download, such as: Experiment28Strangemerges_32-7B.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/Experiment28Strangemerges_32-7B-GGUF Experiment28Strangemerges_32-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/Experiment28Strangemerges_32-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Experiment28Strangemerges_32-7B-GGUF Experiment28Strangemerges_32-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell ./main -ngl 35 -m Experiment28Strangemerges_32-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://github.com/abetlen/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, set the CMAKE_ARGS variable in PowerShell before installing; e.g. for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Experiment28Strangemerges_32-7B.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model!
Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Experiment28Strangemerges_32-7B.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
ConsistentFactor/PetuniaXL
ConsistentFactor
2024-04-01T12:14:21Z
1
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-03-31T21:11:04Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image ---
VGubaydulin/whisper-medium-korus-terms
VGubaydulin
2024-04-01T12:11:42Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ru", "dataset:VGubaydulin/KorusTerms", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-04-01T06:19:02Z
--- language: - ru license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer datasets: - VGubaydulin/KorusTerms model-index: - name: Whisper Large Ru - Korus Terms results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large Ru - Korus Terms This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Korus Terms 0.1 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 15 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
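The card omits an inference example; a minimal sketch follows (the audio path is a placeholder, and forcing Russian decoding is an assumption based on the card's `ru` tag):

```python
from transformers import pipeline

# Audio file path is a placeholder; language forcing is an assumption (card is tagged "ru").
asr = pipeline("automatic-speech-recognition",
               model="VGubaydulin/whisper-medium-korus-terms")
result = asr("call_recording.wav",
             generate_kwargs={"language": "russian", "task": "transcribe"})
print(result["text"])
```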
habulaj/1711970194969x783900486587989000
habulaj
2024-04-01T12:11:01Z
1
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "dataset:Shortyzzzz/TotalDrm", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-04-01T11:17:01Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: in the style of TOK tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: false datasets: - Shortyzzzz/TotalDrm --- # LoRA DreamBooth - squaadinc/1711970194969x783900486587989000 These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer. The weights were trained on the concept prompt: ``` in the style of TOK ``` Use this keyword to trigger your custom model in your prompts. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Usage Make sure to upgrade diffusers to >= 0.19.0: ``` pip install diffusers --upgrade ``` In addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark: ``` pip install invisible_watermark transformers accelerate safetensors ``` To just use the base model, you can run: ```python import torch from diffusers import DiffusionPipeline, AutoencoderKL device = "cuda" if torch.cuda.is_available() else "cpu" vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16) pipe = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16, variant="fp16", use_safetensors=True ) pipe.to(device) # This is where you load your trained weights specific_safetensors = "pytorch_lora_weights.safetensors" lora_scale = 0.9 pipe.load_lora_weights( 'squaadinc/1711970194969x783900486587989000', weight_name = specific_safetensors, # use_auth_token = True ) prompt = "A majestic in the style of TOK jumping from a big stone at night" image = pipe( prompt=prompt, num_inference_steps=50, cross_attention_kwargs={"scale": lora_scale} ).images[0] ```
Pplus/mistral-health-text-generation
Pplus
2024-04-01T12:10:15Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-01T12:09:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheVisitorX/InfinityRP-v1-7B-4.0bpw-exl2
TheVisitorX
2024-04-01T12:04:39Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "exl2", "quantized", "roleplay", "4.0bpw", "merge", "nsfw", "en", "base_model:ChaoticNeutrals/Eris_Floramix_DPO_7B", "base_model:merge:ChaoticNeutrals/Eris_Floramix_DPO_7B", "base_model:ResplendentAI/Datura_7B", "base_model:merge:ResplendentAI/Datura_7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "region:us" ]
text-generation
2024-04-01T11:26:24Z
--- library_name: transformers license: apache-2.0 language: - en tags: - exl2 - quantized - roleplay - 4.0bpw - mistral - merge - nsfw inference: false base_model: - ResplendentAI/Datura_7B - ChaoticNeutrals/Eris_Floramix_DPO_7B --- This repository hosts the exl2 quantization of [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B). **Original model information:** ![waifu/jpeg](https://i.imgur.com/cslLqjd.jpeg) This is an experimental model I currently use. It's far from great, as I'm still working on it, but I leave it here for people to try if they are interested in this format. This model was basically made to stop some upsetting hallucinations, so {{char}} will mostly wait for {{user}}'s response instead of responding itself or deciding for {{user}}. Also, my primary idea was to create a cozy model that thinks. Inspired by [lemonilia/Limamono-Mistral-7B-v0.50](https://huggingface.co/lemonilia/Limamono-Mistral-7B-v0.50) ### Style details: - Quotes are used for character dialogs. - `"Hey, Anon... What do you think about my style?"` - Asterisks can be used for narration, but they are optional; the default novel format is recommended. - `*Her cheeks blush slightly, she tries to hide.*` - Character thoughts are wrapped with ` marks. **These may often occur spontaneously.** - `My heart skips a beat hearing him call me pretty!` *If you want thoughts to appear more often, just add something like this to your system prompt: ```"{{char}} internal thoughts are wrapped with ` marks."```* - Accepted response lengths: ***tiny, short, medium, long, huge*** - For example: ### Response: (length = medium) Note: apparently ***humongous***, ***extreme*** and ***unlimited*** may not work at the moment; not fully tested. ### Prompt format: Extended Alpaca, as always. ``"You are now in roleplay chat mode. Engage in an endless chat with {{user}}. Always wait {{user}} turn, next actions and responses."`` ## Example: ![example](https://files.catbox.moe/j0zxov.png)
vishnu027/dental_classification_model_010424_2
vishnu027
2024-04-01T11:58:06Z
15
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-04-01T11:37:56Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: dental_classification_model_010424_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dental_classification_model_010424_2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5837 - Accuracy: 0.8142 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9273 | 0.99 | 41 | 1.9166 | 0.2281 | | 1.8096 | 2.0 | 83 | 1.7653 | 0.3716 | | 1.6373 | 2.99 | 124 | 1.5785 | 0.4486 | | 1.4996 | 4.0 | 166 | 1.4273 | 0.5060 | | 1.3441 | 4.99 | 207 | 1.2730 | 0.5891 | | 1.1677 | 6.0 | 249 | 1.1615 | 0.6254 | | 0.9809 | 6.99 | 290 | 1.1033 | 0.6254 | | 0.8292 | 8.0 | 332 | 0.9928 | 0.6873 | | 0.8035 | 8.99 | 373 | 0.8762 | 0.7402 | | 0.6982 | 10.0 | 415 | 0.8117 | 0.7341 | | 0.6992 | 10.99 | 456 | 0.7667 | 0.7749 | | 0.5601 | 12.0 | 498 | 0.7563 | 0.7568 | | 0.5358 | 12.99 | 539 | 0.7178 | 0.7749 | | 0.569 | 14.0 | 581 | 0.7356 | 0.7553 | | 0.4503 | 14.99 | 622 | 0.6535 | 0.8051 | | 0.4509 | 16.0 | 664 | 0.6755 | 0.7855 | | 0.5127 | 16.99 | 705 | 0.6431 | 0.7976 | | 0.425 | 18.0 | 747 | 0.6362 | 0.8006 | | 0.3968 | 18.99 | 788 | 0.5821 | 0.8157 | | 0.398 | 20.0 | 830 | 0.6355 | 0.7900 | | 0.4468 | 20.99 | 871 | 0.5103 | 0.8323 | | 0.429 | 22.0 | 913 | 0.6056 | 0.8051 | | 0.3332 | 22.99 | 954 | 0.5681 | 0.8233 | | 0.3431 | 24.0 | 996 | 0.5186 | 0.8263 | | 0.3052 | 24.99 | 1037 | 0.5993 | 0.8036 | | 0.3495 | 26.0 | 1079 | 0.5837 | 0.8142 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
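The card omits an inference example; a minimal sketch follows (the image path is a placeholder, and the seven class labels come from this repo's config):

```python
from transformers import pipeline

# Image path is a placeholder; class names come from the repo's config.
clf = pipeline("image-classification",
               model="vishnu027/dental_classification_model_010424_2")
for pred in clf("tooth_scan.jpg", top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```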
KameronB/sitcc-roberta
KameronB
2024-04-01T11:48:45Z
34
0
transformers
[ "transformers", "pytorch", "roberta", "IT", "classification", "call center", "grammar", "en", "dataset:KameronB/SITCC-dataset", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-03-24T13:17:53Z
--- license: mit datasets: - KameronB/SITCC-dataset language: - en tags: - IT - classification - call center - grammar --- ### Synthetic IT Call Center Data Sentence Quality Predictor A RoBERTa-base model fine-tuned on a synthetic dataset of good and bad sentences that would be found in IT call center tickets. This model aims to predict the quality of sentences in the context of IT support communications, providing a numerical score from 0.0 to 1.0, where 0 represents a poor quality sentence, and 1.0 represents an ideal quality sentence. #### Model Background This model was created out of the necessity to objectively measure the quality of IT call center journaling and improve overall customer service. By leveraging OpenAI's GPT-4 to simulate both effective and ineffective call center agent responses, and then using GPT-4-turbo to rank these responses, we've synthesized a unique dataset that reflects a wide range of possible interactions in an IT support context. The dataset comprises 1,464 items, each scored and annotated with insights into what constitutes quality journaling vs. poor journaling. #### Approach The foundation of this model is the RoBERTa-base transformer, chosen for its robust performance in natural language understanding tasks. I extended and fine-tuned the last four layers of RoBERTa to specialize in our sentence quality prediction task. This fine-tuning process involved manual adjustments and iterative training sessions to refine the model's accuracy and reduce the Mean Squared Error (MSE) on the validation set. #### Performance After several rounds of training and manual tweaks, the model achieved a validation MSE of approximately 0.02. This metric indicates the model's ability to closely predict the quality scores assigned by the simulated call center manager, with a lower MSE reflecting higher accuracy in those predictions. #### Future Work The journey to perfecting this model is ongoing. Plans to improve its performance include: - Expanding the training dataset with more synthesized examples to cover a broader spectrum of potential customer interactions. - Experimenting with adjusting and fine-tuning additional layers of the RoBERTa model to see if that yields better predictive accuracy. - Exploring other evaluation metrics beyond MSE to ensure the model's predictions are as useful and actionable as possible in a real-world IT call center environment. #### How to Use This Model This model is designed for integration into IT call center software systems, where it can automatically score incoming and outgoing ticket responses for quality. To use this model: 1. Ensure you have the Hugging Face Transformers library installed in your Python environment. 2. 
Load the model using the following code snippet: ```python from __future__ import annotations from transformers import RobertaConfig, RobertaModel, RobertaTokenizer, AutoTokenizer import torch # Add a custom regression head to RoBERTa class SITCC(torch.nn.Module): def __init__(self, model, config): super(SITCC, self).__init__() self.roberta = model self.regressor = torch.nn.Linear(config.hidden_size, 1) # Outputs a single quality score def forward(self, input_ids, attention_mask): outputs = self.roberta(input_ids=input_ids, attention_mask=attention_mask) sequence_output = outputs[1] # outputs[1] is the pooled output of the [CLS] token logits = self.regressor(sequence_output) return logits def init_model() -> tuple[SITCC, RobertaTokenizer]: # Load the model from the Hugging Face Hub model_name = "KameronB/sitcc-roberta" tokenizer = AutoTokenizer.from_pretrained(model_name, from_tf=False) config = RobertaConfig.from_pretrained(model_name) # create the model based on the RoBERTa base model model = SITCC(RobertaModel(config), config) # fetch the state dict to apply the fine-tuned weights state_dict = torch.hub.load_state_dict_from_url(f"https://huggingface.co/{model_name}/resolve/main/pytorch_model.bin") # if running on cpu # state_dict = torch.hub.load_state_dict_from_url(f"https://huggingface.co/{model_name}/resolve/main/pytorch_model.bin", map_location=torch.device('cpu')) model.load_state_dict(state_dict) return model, tokenizer model, tokenizer = init_model() def predict(sentences): model.eval() inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt") input_ids = inputs['input_ids'] attention_mask = inputs['attention_mask'] with torch.no_grad(): outputs = model(input_ids, attention_mask) return outputs ```
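A short example of calling the `predict` helper above; the sample sentences are illustrative, and each row of the output is the raw quality score for one sentence:

```python
# Illustrative usage of the predict() helper defined above.
sentences = [
    "User reported VPN drops; verified credentials and reset the MFA token.",
    "fixed it",
]
scores = predict(sentences)  # tensor of shape (len(sentences), 1)
for sentence, score in zip(sentences, scores):
    print(f"{score.item():.2f}  {sentence}")
```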
serhii-korobchenko/Helsinki-Shevchenko-2024-04-01-10-35-04
serhii-korobchenko
2024-04-01T11:47:38Z
59
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "base_model:Helsinki-NLP/opus-mt-ru-uk", "base_model:finetune:Helsinki-NLP/opus-mt-ru-uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-04-01T10:35:29Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-ru-uk tags: - generated_from_keras_callback model-index: - name: serhii-korobchenko/Helsinki-Shevchenko-2024-04-01-10-35-04 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # serhii-korobchenko/Helsinki-Shevchenko-2024-04-01-10-35-04 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-uk](https://huggingface.co/Helsinki-NLP/opus-mt-ru-uk) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7141 - Validation Loss: 1.1762 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-06, 'decay_steps': 38835, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 1e-06} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.5491 | 1.2914 | 0 | | 1.1163 | 1.2121 | 1 | | 0.9106 | 1.1878 | 2 | | 0.7863 | 1.1770 | 3 | | 0.7141 | 1.1762 | 4 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
Hemg/deeepfake-audio-Recognition-ttoo
Hemg
2024-04-01T11:45:12Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:audiofolder", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-04-01T11:26:58Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer datasets: - audiofolder metrics: - accuracy model-index: - name: deeepfake-audio-Recognition-ttoo results: - task: name: Audio Classification type: audio-classification dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9545454545454546 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deeepfake-audio-Recognition-ttoo This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2288 - Accuracy: 0.9545 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6417 | 1.0 | 33 | 0.5774 | 0.7879 | | 0.4818 | 2.0 | 66 | 0.3792 | 0.8485 | | 0.2756 | 3.0 | 99 | 0.3066 | 0.8788 | | 0.3106 | 4.0 | 132 | 0.1951 | 0.9545 | | 0.2138 | 5.0 | 165 | 0.2078 | 0.9394 | | 0.0988 | 6.0 | 198 | 0.3227 | 0.9091 | | 0.1043 | 7.0 | 231 | 0.2893 | 0.9394 | | 0.0808 | 8.0 | 264 | 0.2177 | 0.9545 | | 0.1312 | 9.0 | 297 | 0.2846 | 0.9091 | | 0.0667 | 10.0 | 330 | 0.1955 | 0.9545 | | 0.0513 | 11.0 | 363 | 0.2553 | 0.9545 | | 0.0217 | 12.0 | 396 | 0.1708 | 0.9545 | | 0.0136 | 13.0 | 429 | 0.1641 | 0.9545 | | 0.0236 | 14.0 | 462 | 0.2203 | 0.9545 | | 0.0097 | 15.0 | 495 | 0.2253 | 0.9545 | | 0.003 | 16.0 | 528 | 0.2288 | 0.9545 | ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
michidoma/bert-politics-sa
michidoma
2024-04-01T11:37:42Z
107
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-04-01T11:37:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SJChaudhuri/beit-base-patch16-224-pt22k-finetuned-eurosat
SJChaudhuri
2024-04-01T11:35:46Z
192
0
transformers
[ "transformers", "pytorch", "tensorboard", "beit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-04-01T10:53:17Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: beit-base-patch16-224-pt22k-finetuned-eurosat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224-pt22k-finetuned-eurosat This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9338 - Accuracy: 0.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 3 | 1.3657 | 0.3095 | | No log | 2.0 | 6 | 1.1966 | 0.4286 | | No log | 3.0 | 9 | 1.1076 | 0.4524 | | 1.2696 | 4.0 | 12 | 1.0717 | 0.5714 | | 1.2696 | 5.0 | 15 | 0.9948 | 0.5238 | | 1.2696 | 6.0 | 18 | 1.0701 | 0.5 | | 1.0945 | 7.0 | 21 | 0.9920 | 0.5 | | 1.0945 | 8.0 | 24 | 0.9338 | 0.6667 | | 1.0945 | 9.0 | 27 | 0.9605 | 0.5714 | | 0.9538 | 10.0 | 30 | 0.9285 | 0.6190 | | 0.9538 | 11.0 | 33 | 0.9113 | 0.5714 | | 0.9538 | 12.0 | 36 | 0.8414 | 0.6190 | | 0.9538 | 13.0 | 39 | 0.9422 | 0.5476 | | 0.8646 | 14.0 | 42 | 0.8165 | 0.6429 | | 0.8646 | 15.0 | 45 | 0.9582 | 0.5238 | | 0.8646 | 16.0 | 48 | 0.8548 | 0.6190 | | 0.8082 | 17.0 | 51 | 0.8568 | 0.6190 | | 0.8082 | 18.0 | 54 | 0.8792 | 0.5476 | | 0.8082 | 19.0 | 57 | 0.8819 | 0.5476 | | 0.7731 | 20.0 | 60 | 0.8454 | 0.5714 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3
areegtarek/idefics-9b-instruct-threesplitsthreeepochs-1
areegtarek
2024-04-01T11:34:25Z
64
0
transformers
[ "transformers", "safetensors", "idefics", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
image-text-to-text
2024-04-01T04:50:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
denru/Midnight-Miqu-70B-v1.5x2-5_0bpw-h8-exl2-pippa
denru
2024-04-01T11:32:34Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:sophosympatheia/Midnight-Miqu-70B-v1.5", "base_model:quantized:sophosympatheia/Midnight-Miqu-70B-v1.5", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-04-01T11:24:42Z
--- base_model: - sophosympatheia/Midnight-Miqu-70B-v1.5 library_name: transformers tags: - mergekit - merge --- # merged_model This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: sophosympatheia/Midnight-Miqu-70B-v1.5 layer_range: [0, 16] - sources: - model: sophosympatheia/Midnight-Miqu-70B-v1.5 layer_range: [8, 24] - sources: - model: sophosympatheia/Midnight-Miqu-70B-v1.5 layer_range: [16, 32] - sources: - model: sophosympatheia/Midnight-Miqu-70B-v1.5 layer_range: [24, 40] - sources: - model: sophosympatheia/Midnight-Miqu-70B-v1.5 layer_range: [32, 48] - sources: - model: sophosympatheia/Midnight-Miqu-70B-v1.5 layer_range: [40, 56] - sources: - model: sophosympatheia/Midnight-Miqu-70B-v1.5 layer_range: [48, 64] - sources: - model: sophosympatheia/Midnight-Miqu-70B-v1.5 layer_range: [56, 72] - sources: - model: sophosympatheia/Midnight-Miqu-70B-v1.5 layer_range: [64, 80] merge_method: passthrough dtype: float16 ```
Gundra/reuters-gpt2-text-gen
Gundra
2024-04-01T11:21:40Z
127
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:Gundra/reuters-gpt2-text-gen", "base_model:finetune:Gundra/reuters-gpt2-text-gen", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-21T17:52:01Z
--- license: mit base_model: Gundra/reuters-gpt2-text-gen tags: - generated_from_trainer model-index: - name: reuters-gpt2-text-gen results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reuters-gpt2-text-gen This model is a fine-tuned version of [Gundra/reuters-gpt2-text-gen](https://huggingface.co/Gundra/reuters-gpt2-text-gen) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.3841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.1271 | 1.0 | 269 | 4.3841 | ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
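No usage snippet is included above; a minimal sketch with the πŸ€— `pipeline` API (the prompt below is illustrative) could look like:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Gundra/reuters-gpt2-text-gen")
# Prompt is illustrative; the model was fine-tuned on Reuters-style news text.
print(generator("LONDON - Oil prices rose on", max_new_tokens=50)[0]["generated_text"])
```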
al9sr31/KoSOLAR-10.7B-test-v0.2
al9sr31
2024-04-01T11:18:32Z
1
0
peft
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GPTQ", "base_model:adapter:TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GPTQ", "license:apache-2.0", "4-bit", "gptq", "region:us" ]
null
2024-04-01T11:15:33Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GPTQ model-index: - name: output_solor/exp_16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GPTQ is_llama_derived_model: false gptq: true gptq_disable_exllama: true model_type: AutoModelForCausalLM tokenizer_type: LlamaTokenizer tokenizer_use_fast: true tokenizer_legacy: true load_in_8bit: false load_in_4bit: false strict: false push_dataset_to_hub: hf_use_auth_token: true datasets: - path: datasets_cleansinng/datasets/helper_selector_1280_0305_v01.jsonl #Path to json dataset file in huggingface #for type,conversation arguments read axolotl readme and pick what is suited for your project, I wanted a chatbot and put sharegpt and chatml type: system_prompt: "Instruction에 따라 μ μ ˆν•˜κ²Œ Input 데이터λ₯Ό ν™œμš©ν•˜μ—¬ Output 닡변을 ν•˜μ„Έμš”. λ„ˆλŠ” μ‚¬μš©μž 질문(Instruction)에 μ‹€μ‹œκ°„μœΌλ‘œ API ν˜ΈμΆœμ„ μœ„ν•œ Json ν˜•μ‹μ˜ κ΅¬μ‘°ν™”λœ κ²°κ³Όλ₯Ό μƒμ„±ν•˜λŠ” 인곡지λŠ₯이야." format: "[INST]### Instruction:\n{instruction}\n\n### Input:{input}\n\n[/INST]### Output: " no_input_format: "[INST]### Instruction:\n{instruction}\n\n[/INST]### Output: " field_instruction: Instruction field_input: Input field_output: Output dataset_prepared_path: val_set_size: 0.05 adapter: lora lora_model_dir: sequence_len: 4096 sample_packing: lora_r: 32 lora_alpha: 32 lora_dropout: 0.05 lora_target_modules: - k_proj - o_proj - q_proj - v_proj lora_target_linear: lora_fan_in_fan_out: wandb_project: wandb_watch: wandb_name: wandb_log_model: output_dir: ./output_solor/exp_16 gradient_accumulation_steps: 8 micro_batch_size: 8 num_epochs: 5 optimizer: adamw_torch adam_beta2: 0.95 adam_eps: 0.00001 max_grad_norm: 1.0 torchdistx_path: lr_scheduler: cosine lr_quadratic_warmup: true learning_rate: 0.0005 train_on_inputs: false group_by_length: false bf16: false fp16: false float16: true tf32: true gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: sdp_attention: flash_optimum: warmup_steps: 100 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: deepspeed_configs/zero1.json weight_decay: 0.1 special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>" ``` </details><br> # output_solor/exp_16 This model is a fine-tuned version of [TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GPTQ](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GPTQ) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2015 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3493 | 0.05 | 1 | 1.2795 | | 1.2483 | 0.26 | 5 | 1.2769 | | 1.2275 | 0.53 | 10 | 1.2099 | | 1.0529 | 0.79 | 15 | 1.0724 | | 0.8642 | 1.05 | 20 | 0.9709 | | 0.8477 | 1.32 | 25 | 0.8245 | | 0.7207 | 1.58 | 30 | 0.6994 | | 0.4656 | 1.84 | 35 | 0.5878 | | 0.4949 | 2.11 | 40 | 0.4970 | | 0.3497 | 2.37 | 45 | 0.4221 | | 0.3288 | 2.63 | 50 | 0.3672 | | 0.3011 | 2.89 | 55 | 0.3250 | | 0.2648 | 3.16 | 60 | 0.2900 | | 0.3084 | 3.42 | 65 | 0.2591 | | 0.2696 | 3.68 | 70 | 0.2459 | | 0.2197 | 3.95 | 75 | 0.2286 | | 0.1905 | 4.21 | 80 | 0.2111 | | 0.1815 | 4.47 | 85 | 0.2084 | | 0.2164 | 4.74 | 90 | 0.2128 | | 0.1412 | 5.0 | 95 | 0.2015 | ### Framework versions - PEFT 0.9.1.dev0 - Transformers 4.38.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.18.0 - Tokenizers 0.15.0
ADG-2353/a2c-PandaReachDense-v3
ADG-2353
2024-04-01T11:13:46Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-04-01T11:09:25Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.20 +/- 0.09 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
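A minimal sketch filling in the TODO above, assuming the checkpoint follows the usual `huggingface_sb3` naming convention (`a2c-PandaReachDense-v3.zip`) and that `panda-gym` is installed:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- registers the PandaReachDense-v3 environment

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is an assumption based on the usual sb3/hub naming convention.
checkpoint = load_from_hub(
    repo_id="ADG-2353/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```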
KimByeongSu/gpt-neo-125m-cs-finetuning-70000-3
KimByeongSu
2024-04-01T11:11:38Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-125m", "base_model:finetune:EleutherAI/gpt-neo-125m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T10:51:24Z
--- license: mit base_model: EleutherAI/gpt-neo-125m tags: - generated_from_trainer model-index: - name: gpt-neo-125m-cs-finetuning-70000-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-125m-cs-finetuning-70000-3 This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2050 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.4238 | 1.0 | 894 | 3.2737 | | 3.1502 | 2.0 | 1788 | 3.2186 | | 3.0562 | 3.0 | 2682 | 3.2050 | ### Framework versions - Transformers 4.36.2 - Pytorch 1.13.1+cu117 - Datasets 2.14.6 - Tokenizers 0.15.0
LnL-AI/dbrx-base-converted-v2-4bit-gptq-marlin
LnL-AI
2024-04-01T11:03:14Z
3
0
transformers
[ "transformers", "dbrx", "text-generation", "custom_code", "arxiv:2211.15841", "arxiv:2304.11277", "autotrain_compatible", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-03-31T14:13:54Z
### DEPRECATED Please use the better-calibrated version at https://huggingface.co/LnL-AI/dbrx-base-converted-v2-4bit-gptq-marlin-v2 ## Use at your own risk! 4-bit GPTQ (marlin) version of dbrx-base-converted-v2. To run: 1. Use PR https://github.com/AutoGPTQ/AutoGPTQ/pull/625 2. You need ~68GB of VRAM (1x A100 80G will do) 3. Use the combine_tensors.sh script to combine the two split files into one (HF has a 50GB max file size limit). ```json { "bits": 4, "group_size": 128, "damp_percent": 0.005, "desc_act": false, "static_groups": false, "sym": true, "true_sequential": true, "model_name_or_path": null, "model_file_base_name": null, "quant_method": "gptq", "checkpoint_format": "marlin" } ``` TODO: * Add sharding of the quantized model so there is no need to manually combine safetensors files --- inference: false license: other license_name: databricks-open-model-license license_link: https://www.databricks.com/legal/open-model-license --- # DBRX Base * DBRX Base is a mixture-of-experts (MoE) large language model trained from scratch by Databricks. * We are releasing both DBRX Base, a pretrained base model, and DBRX Instruct, a fine-tuned version for few-turn interactions, under [an open license](https://www.databricks.com/legal/open-model-license). * This is the repository for DBRX Base. DBRX Instruct can be found [here](https://huggingface.co/databricks/dbrx-instruct). * For full details on the DBRX models, please read our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm). ## Model Overview DBRX is a [transformer-based](https://www.isattentionallyouneed.com/) decoder-only large language model (LLM) that was trained using next-token prediction. It uses a *fine-grained* mixture-of-experts (MoE) architecture with 132B total parameters of which 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2. This provides 65x more possible combinations of experts and we found that this improves model quality. DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA). It uses the GPT-4 tokenizer as provided in the [tiktoken](https://github.com/openai/tiktoken) repository. We made these choices based on exhaustive evaluation and scaling experiments. DBRX was pretrained on 12T tokens of carefully curated data and a maximum context length of 32K tokens. We estimate that this data is at least 2x better token-for-token than the data we used to pretrain the MPT family of models. This new dataset was developed using the full suite of Databricks tools, including Apache Sparkβ„’ and Databricks notebooks for data processing, and Unity Catalog for data management and governance. We used curriculum learning for pretraining, changing the data mix during training in ways we found to substantially improve model quality. * **Inputs:** DBRX only accepts text-based inputs and accepts a context length of up to 32768 tokens. * **Outputs:** DBRX only produces text-based outputs. * **Model Architecture:** More detailed information about DBRX Instruct and DBRX Base can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm). 
* **License:** [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) * **Acceptable Use Policy:** [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model) * **Version:** 1.0 * **Owner:** Databricks, Inc. ## Usage These are several general ways to use the DBRX models: * DBRX Base and DBRX Instruct are available for download on HuggingFace (see our Quickstart guide below). This is the HF repository for DBRX Base; DBRX Instruct can be found [here](https://huggingface.co/databricks/dbrx-instruct). * The DBRX model repository can be found on GitHub [here](https://github.com/databricks/dbrx). * DBRX Base and DBRX Instruct are available with [Databricks Foundation Model APIs](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) via both *Pay-per-token* and *Provisioned Throughput* endpoints. These are enterprise-ready deployments. * For more information on how to fine-tune using LLM-Foundry, please take a look at our LLM pretraining and fine-tuning [documentation](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md). ## Quickstart Guide **NOTE: This is DBRX Base, and has not been instruction finetuned. It has not been trained for interactive chat and is only a completion model.** If you are looking for the finetuned model, please use [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct). Getting started with DBRX models is easy with the `transformers` library. The model requires ~264GB of RAM and the following packages: ```bash pip install "transformers>=4.39.2" "tiktoken>=0.6.0" ``` If you'd like to speed up download time, you can use the `hf_transfer` package as described by Huggingface [here](https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads). ```bash pip install hf_transfer export HF_HUB_ENABLE_HF_TRANSFER=1 ``` You will need to request access to this repository to download the model. Once this is granted, [obtain an access token](https://huggingface.co/docs/hub/en/security-tokens) with `read` permission, and supply the token below. 
### Run the model on a CPU: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-base", trust_remote_code=True, token="hf_YOUR_TOKEN") model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-base", device_map="cpu", torch_dtype=torch.bfloat16, trust_remote_code=True, token="hf_YOUR_TOKEN") input_text = "Databricks was founded in " input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids, max_new_tokens=100) print(tokenizer.decode(outputs[0])) ``` ### Run the model on multiple GPUs: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-base", trust_remote_code=True, token="hf_YOUR_TOKEN") model = AutoModelForCausalLM.from_pretrained("databricks/dbrx-base", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True, token="hf_YOUR_TOKEN") input_text = "Databricks was founded in " input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=100) print(tokenizer.decode(outputs[0])) ``` If your GPU system supports [FlashAttention2](https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2), you can add `attn_implementation="flash_attention_2"` as a keyword to `AutoModelForCausalLM.from_pretrained()` to achieve faster inference. ## Limitations and Ethical Considerations ### Training Dataset Limitations The DBRX models were trained on 12T tokens of text, with a knowledge cutoff date of December 2023. The training mix used for DBRX contains both natural-language and code examples. The vast majority of our training data is in the English language. We did not test DBRX for non-English proficiency. Therefore, DBRX should be considered a generalist model for text-based use in the English language. DBRX does not have multimodal capabilities. ### Associated Risks and Recommendations All foundation models are novel technologies that carry various risks, and may output information that is inaccurate, incomplete, biased, or offensive. Users should exercise judgment and evaluate such output for accuracy and appropriateness for their desired use case before using or sharing it. Databricks recommends [using retrieval augmented generation (RAG)](https://www.databricks.com/glossary/retrieval-augmented-generation-rag) in scenarios where accuracy and fidelity are important. We also recommend that anyone using or fine-tuning either DBRX Base or DBRX Instruct perform additional testing around safety in the context of their particular application and domain. ## Intended Uses ### Intended Use Cases The DBRX models are open, general-purpose LLMs intended and licensed for both commercial and research applications. They can be further fine-tuned for various domain-specific natural language and coding tasks. DBRX Base can be used as an off-the-shelf model for text completion for general English-language and coding tasks. Please review the Associated Risks section above, as well as the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model) for further information about permissible uses of DBRX Base and its derivatives. 
### Out-of-Scope Use Cases DBRX models are not intended to be used out-of-the-box in non-English languages and do not support native code execution, or other forms of function-calling. DBRX models should not be used in any manner that violates applicable laws or regulations or in any other way that is prohibited by the [Databricks Open Model License](https://www.databricks.com/legal/open-model-license) and [Databricks Open Model Acceptable Use Policy](https://www.databricks.com/legal/acceptable-use-policy-open-model). ## Training Stack MoE models are complicated to train, and the training of DBRX Base and DBRX Instruct was heavily supported by Databricks’ infrastructure for data processing and large-scale LLM training (e.g., [Composer](https://github.com/mosaicml/composer), [Streaming](https://github.com/mosaicml/streaming), [Megablocks](https://github.com/stanford-futuredata/megablocks), and [LLM Foundry](https://github.com/mosaicml/llm-foundry)). Composer is our core library for large-scale training. It provides an optimized training loop, easy [checkpointing](https://docs.mosaicml.com/projects/composer/en/latest/trainer/checkpointing.html) and [logging](https://docs.mosaicml.com/projects/composer/en/latest/trainer/logging.html#wood-logging), [FSDP](https://pytorch.org/docs/stable/fsdp.html)-based [model sharding](https://docs.mosaicml.com/projects/composer/en/latest/notes/distributed_training.html#fullyshardeddataparallel-fsdp), convenient [abstractions](https://docs.mosaicml.com/projects/composer/en/latest/trainer/time.html), extreme customizability via [callbacks](https://docs.mosaicml.com/projects/composer/en/latest/trainer/callbacks.html), and more. Streaming enables fast, low cost, and scalable training on large datasets from cloud storage. It handles a variety of challenges around deterministic resumption as node counts change, avoiding redundant downloads across devices, high-quality shuffling at scale, sample-level random access, and speed. Megablocks is a lightweight library for MoE training. Crucially, it supports β€œdropless MoE,” which avoids inefficient padding and is intended to provide deterministic outputs for a given sequence no matter what other sequences are in the batch. LLM Foundry ties all of these libraries together to create a simple LLM pretraining, fine-tuning, and inference experience. DBRX was trained using proprietary optimized versions of the above open source libraries, along with our [LLM training platform](https://www.databricks.com/product/machine-learning/mosaic-ai-training). ## Evaluation We find that DBRX outperforms established open-source and open-weight base models on the [Databricks Model Gauntlet](https://www.databricks.com/blog/llm-evaluation-for-icl), the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and HumanEval. The Databricks Model Gauntlet measures performance on more than 30 tasks across six categories: world knowledge, common sense reasoning, language understanding, reading comprehension, symbolic problem solving, and programming. The Hugging Face Open LLM Leaderboard measures the average of ARC-Challenge, HellaSwag, MMLU, TruthfulQA, Winogrande and GSM8k. HumanEval measures coding ability. Full evaluation details can be found in our [technical blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm). 
## Acknowledgements The DBRX models were made possible thanks in large part to the open-source community, especially: * The [MegaBlocks](https://arxiv.org/abs/2211.15841) library, which established a foundation for our MoE implementation. * [PyTorch FSDP](https://arxiv.org/abs/2304.11277), which we built on for distributed training.
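As a quick sanity check, the "65x more possible combinations" figure quoted in the Model Overview follows directly from binomial coefficients:

```python
from math import comb

# DBRX: 16 experts, 4 active; Mixtral-8x7B / Grok-1: 8 experts, 2 active.
print(comb(16, 4))               # 1820 possible expert subsets
print(comb(8, 2))                # 28
print(comb(16, 4) / comb(8, 2))  # 65.0 -- the "65x" figure from the card
```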
leimu/1
leimu
2024-04-01T10:55:08Z
3
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:ByteDance/SDXL-Lightning", "base_model:adapter:ByteDance/SDXL-Lightning", "license:mit", "region:us" ]
text-to-image
2024-04-01T10:06:18Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '11' output: url: >- images/202900-2505043907-detailed face,hires,detailed eyes,detailed hands,best hands,anime screencap,_ _lora_AnnaNishikinomiya_16XL_1_ shimoneta, anna_ni.jpeg base_model: ByteDance/SDXL-Lightning instance_prompt: '12' license: mit --- # Anna <Gallery /> ## Trigger words You should use `12` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/leimu/1/tree/main) them in the Files & versions tab.
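The card only confirms the `12` trigger word; as a hedged usage sketch, the LoRA can be loaded onto an SDXL pipeline with πŸ€— Diffusers (the base pipeline, step count, and prompt below are assumptions, not specified by the repo):

```python
import torch
from diffusers import AutoPipelineForText2Image

# The base pipeline is an assumption: the card lists ByteDance/SDXL-Lightning
# as the base model, but any SDXL pipeline compatible with the LoRA should work.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("leimu/1")

# "12" is the trigger word from the card; the rest of the prompt is illustrative.
image = pipe("12, anime screencap, detailed face", num_inference_steps=25).images[0]
image.save("anna.png")
```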
Yorick/sum_cap_dog
Yorick
2024-04-01T10:54:46Z
11
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-04-01T09:30:27Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - Yorick/sum_cap_dog These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
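A minimal usage sketch with πŸ€— Diffusers; the placeholder token in the prompt below is an assumption, since the actual concept token is defined by the embedding file in this repo:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Loads the learned embedding from this repo; the concept token it registers
# comes from the repo's files, not from this snippet.
pipe.load_textual_inversion("Yorick/sum_cap_dog")

# Placeholder token below is an assumption for illustration.
image = pipe("a photo of <sum_cap_dog> in the park").images[0]
image.save("dog.png")
```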
subhamiiita/bart_recommendation_sports_equipment_english
subhamiiita
2024-04-01T10:54:29Z
83
0
transformers
[ "transformers", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-xsum", "base_model:finetune:google/pegasus-xsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-04-01T10:53:12Z
--- base_model: google/pegasus-xsum tags: - generated_from_trainer metrics: - rouge model-index: - name: bart_recommendation_sports_equipment_english results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart_recommendation_sports_equipment_english This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9511 - Rouge1: 29.2063 - Rouge2: 9.5238 - Rougel: 28.8889 - Rougelsum: 29.3651 - Gen Len: 3.4762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 0.96 | 12 | 5.1428 | 25.7937 | 4.7619 | 26.1905 | 26.1905 | 3.3810 | | No log | 2.0 | 25 | 4.5991 | 28.4127 | 4.7619 | 28.4127 | 28.4127 | 4.2857 | | No log | 2.96 | 37 | 4.3659 | 30.0000 | 4.7619 | 30.0 | 30.2381 | 3.9048 | | No log | 4.0 | 50 | 4.2992 | 23.2540 | 4.7619 | 23.1746 | 23.3333 | 3.9524 | | No log | 4.96 | 62 | 4.1730 | 23.2540 | 4.7619 | 23.1746 | 23.3333 | 3.5714 | | No log | 6.0 | 75 | 4.0884 | 29.2063 | 14.2857 | 29.2063 | 29.2063 | 3.4762 | | No log | 6.96 | 87 | 4.0252 | 25.0 | 4.7619 | 24.9206 | 25.2381 | 3.4762 | | No log | 8.0 | 100 | 4.0019 | 31.5873 | 14.2857 | 31.1905 | 31.4286 | 3.5714 | | No log | 8.96 | 112 | 3.9648 | 24.2857 | 4.7619 | 24.2857 | 24.7619 | 3.4762 | | No log | 9.6 | 120 | 3.9511 | 29.2063 | 9.5238 | 28.8889 | 29.3651 | 3.4762 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
yuchiahung/donut-base-invoice-resize_splitbydate_roc_240401
yuchiahung
2024-04-01T10:53:26Z
49
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-04-01T02:59:52Z
--- license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-invoice-resize_splitbydate_roc_240401 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-invoice-resize_splitbydate_roc_240401 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.14.5 - Tokenizers 0.15.2
gurpreetzenscale/bart-cnn-aps-fineTuned
gurpreetzenscale
2024-04-01T10:51:45Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-29T11:34:17Z
--- license: mit base_model: facebook/bart-large-cnn tags: - generated_from_trainer model-index: - name: bart-cnn-aps-fineTuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-cnn-aps-fineTuned This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 6 | 0.1899 | | 1.3242 | 2.0 | 12 | 0.0825 | | 1.3242 | 3.0 | 18 | 0.0546 | | 0.069 | 4.0 | 24 | 0.0347 | | 0.0352 | 5.0 | 30 | 0.0277 | | 0.0352 | 6.0 | 36 | 0.0242 | | 0.0253 | 7.0 | 42 | 0.0217 | | 0.0253 | 8.0 | 48 | 0.0210 | | 0.0216 | 9.0 | 54 | 0.0208 | | 0.0201 | 10.0 | 60 | 0.0208 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
eunyounglee/yi-6b-200k-gptq-ft-1
eunyounglee
2024-04-01T10:49:34Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-01T10:49:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rahulrajrr/llama2_finetuned_chatbot
rahulrajrr
2024-04-01T10:48:11Z
0
0
null
[ "tensorboard", "generated_from_trainer", "region:us" ]
null
2024-04-01T10:45:41Z
---
tags:
- generated_from_trainer
model-index:
- name: llama2_finetuned_chatbot
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llama2_finetuned_chatbot

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
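## Configuration sketch

The card lists hyperparameters but no code. As a hedged sketch, they map onto `transformers.TrainingArguments` roughly as follows; `output_dir` is a placeholder, and the Adam settings shown above are the Trainer defaults:

```python
from transformers import TrainingArguments

# Sketch of the configuration listed above; output_dir is an assumption.
args = TrainingArguments(
    output_dir="llama2_finetuned_chatbot",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 4 x 4 = total train batch size of 16
    lr_scheduler_type="linear",
    max_steps=5,
)
```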
SidXXD/test4crossattentionmap-mist-0
SidXXD
2024-04-01T10:40:37Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "custom-diffusion", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-04-01T10:36:40Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---

# Custom Diffusion - SidXXD/test4crossattentionmap-mist-0

These are Custom Diffusion adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "photo of a <new1> dog" using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.

For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
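## Inference sketch

Since the card leaves usage empty, here is a minimal inference sketch with diffusers. The weight file names are the defaults written by the `custom_diffusion` training script and are an assumption for this repo:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the Custom Diffusion attention weights and the <new1> token embedding.
pipe.unet.load_attn_procs(
    "SidXXD/test4crossattentionmap-mist-0",
    weight_name="pytorch_custom_diffusion_weights.bin",  # assumed default filename
)
pipe.load_textual_inversion(
    "SidXXD/test4crossattentionmap-mist-0", weight_name="<new1>.bin"  # assumed
)

image = pipe("photo of a <new1> dog", num_inference_steps=50).images[0]
image.save("new1_dog.png")
```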
anand-s/mistral-biomedical-entity-extraction
anand-s
2024-04-01T10:34:39Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-01T10:30:38Z
--- library_name: peft base_model: mistralai/Mistral-7B-Instruct-v0.2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
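## Loading sketch

The card names the base model but gives no loading code. A minimal sketch, assuming the adapter loads on top of the instruct base with standard PEFT calls (`device_map="auto"` requires `accelerate`):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"  # from the card's base_model field
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "anand-s/mistral-biomedical-entity-extraction")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```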
Terryue/my_awesome_food_model
Terryue
2024-04-01T10:32:56Z
194
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:food101", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-04-01T09:54:01Z
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: food101
      type: food101
      config: default
      split: train[:5000]
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.895
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_food_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6473
- Accuracy: 0.895

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7119 | 0.99 | 62 | 2.5471 | 0.823 |
| 1.8543 | 2.0 | 125 | 1.8205 | 0.88 |
| 1.6069 | 2.98 | 186 | 1.6473 | 0.895 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
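## Inference sketch

A minimal usage sketch for this image classifier; `"food.jpg"` is a placeholder path:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Terryue/my_awesome_food_model")
print(classifier("food.jpg")[:3])  # top predictions with labels and scores
```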
julycodes/gemma-assessment-plan-finetune-test
julycodes
2024-04-01T10:27:55Z
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T10:21:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Akhil9514/Fine_Tune_T5_Model_News_Summarization
Akhil9514
2024-04-01T10:27:08Z
1
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-04-01T02:26:50Z
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: Akhil9514/Fine_Tune_T5_Model_News_Summarization
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Akhil9514/Fine_Tune_T5_Model_News_Summarization

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6688
- Validation Loss: 1.4383
- Train Lr: 2e-05
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Lr | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 1.6688 | 1.4383 | 2e-05 | 0 |

### Framework versions

- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.1.0
- Tokenizers 0.15.2
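## Inference sketch

The record's tags mark this as a TensorFlow checkpoint, so a hedged sketch loads it with the TF model class; the input text is a placeholder:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "Akhil9514/Fine_Tune_T5_Model_News_Summarization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

text = "summarize: " + "Paste a news article here."  # T5 expects a task prefix
inputs = tokenizer(text, return_tensors="tf", truncation=True)
ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```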
OctavianB/RoBERTa-openAI-detector-fintuned
OctavianB
2024-04-01T10:21:37Z
180
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-04-01T10:20:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
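## Usage sketch

The card itself is an empty template; based only on the record's `text-classification` pipeline tag, a hedged inference sketch would be the following. The label meanings are undocumented, so inspect the output before relying on it:

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="OctavianB/RoBERTa-openAI-detector-fintuned",
)
print(detector("Some passage whose provenance you want to score."))
```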
MM2157/AraBERT_token_classification__AraEval24_merged_rassd_aratweets
MM2157
2024-04-01T10:19:37Z
105
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-31T14:18:26Z
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: AraBERT_token_classification__AraEval24_merged_rassd_aratweets
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# AraBERT_token_classification__AraEval24_merged_rassd_aratweets

This model is a fine-tuned version of [aubmindlab/bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8783
- Precision: 0.0736
- Recall: 0.0243
- F1: 0.0365
- Accuracy: 0.8564

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7095 | 1.0 | 3105 | 0.8134 | 1.0 | 0.0001 | 0.0002 | 0.8633 |
| 0.6521 | 2.0 | 6210 | 0.7728 | 0.1149 | 0.0021 | 0.0041 | 0.8631 |
| 0.5857 | 3.0 | 9315 | 0.7770 | 0.0383 | 0.0009 | 0.0017 | 0.8632 |
| 0.5233 | 4.0 | 12420 | 0.7929 | 0.0896 | 0.0100 | 0.0180 | 0.8624 |
| 0.5096 | 5.0 | 15525 | 0.7911 | 0.0716 | 0.0108 | 0.0187 | 0.8617 |
| 0.4685 | 6.0 | 18630 | 0.8200 | 0.0906 | 0.0144 | 0.0248 | 0.8618 |
| 0.4393 | 7.0 | 21735 | 0.8399 | 0.0939 | 0.0160 | 0.0273 | 0.8618 |
| 0.4204 | 8.0 | 24840 | 0.8361 | 0.0862 | 0.0230 | 0.0363 | 0.8590 |
| 0.3872 | 9.0 | 27945 | 0.8706 | 0.0782 | 0.0251 | 0.0380 | 0.8567 |
| 0.3569 | 10.0 | 31050 | 0.8783 | 0.0736 | 0.0243 | 0.0365 | 0.8564 |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.2
- Tokenizers 0.13.3
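## Inference sketch

A hedged usage sketch for this token classifier; the Arabic sentence is a placeholder, and the label set is undocumented in the card:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="MM2157/AraBERT_token_classification__AraEval24_merged_rassd_aratweets",
    aggregation_strategy="simple",  # merge word-piece predictions into spans
)
print(ner("ضع Ω‡Ω†Ψ§ Ψ¬Ω…Ω„Ψ© ΨΉΨ±Ψ¨ΩŠΨ© للاختبار"))  # "put a test Arabic sentence here"
```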
guptasaurabh78/mistral_b_finance_sg5
guptasaurabh78
2024-04-01T10:13:40Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-01T10:13:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
prerna0312/whisper-small-hi
prerna0312
2024-04-01T10:12:35Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-04-01T09:42:15Z
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: whisper-small-hi
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-small-hi

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 100
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
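## Inference sketch

A minimal transcription sketch; `"sample.wav"` is a placeholder recording (the pipeline needs `ffmpeg` to decode audio files):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="prerna0312/whisper-small-hi")
print(asr("sample.wav")["text"])
```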
JinbiaoZhu/finetuned-Git-base-pokemon-blip-captions-ImageCaptioning
JinbiaoZhu
2024-04-01T10:09:27Z
63
0
transformers
[ "transformers", "safetensors", "git", "image-text-to-text", "generated_from_trainer", "base_model:microsoft/git-base", "base_model:finetune:microsoft/git-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-04-01T08:38:40Z
---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: finetuned-Git-base-pokemon-blip-captions-ImageCaptioning
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned-Git-base-pokemon-blip-captions-ImageCaptioning

This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
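## Inference sketch

The record's pipeline tag is `image-text-to-text`, but GIT captioning checkpoints are typically served through the `image-to-text` pipeline; a hedged sketch with a placeholder image path:

```python
from transformers import pipeline

captioner = pipeline(
    "image-to-text",
    model="JinbiaoZhu/finetuned-Git-base-pokemon-blip-captions-ImageCaptioning",
)
print(captioner("pokemon.png")[0]["generated_text"])
```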
baek26/wiki_asp-educational_institution_7476_bart-base
baek26
2024-04-01T10:09:26Z
107
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-04-01T07:48:45Z
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: wiki_asp-educational_institution_7476_bart-base
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wiki_asp-educational_institution_7476_bart-base

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6483
- Rouge1: 0.1554
- Rouge2: 0.0598
- Rougel: 0.1308
- Rougelsum: 0.1308
- Gen Len: 19.1331

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.81 | 500 | 2.8696 | 0.1509 | 0.054 | 0.1261 | 0.126 | 19.2999 |
| No log | 3.63 | 1000 | 2.7162 | 0.1551 | 0.0574 | 0.1292 | 0.1293 | 19.1107 |
| No log | 5.44 | 1500 | 2.6720 | 0.154 | 0.0588 | 0.1297 | 0.1295 | 19.071 |
| 2.9542 | 7.26 | 2000 | 2.6582 | 0.1564 | 0.0602 | 0.1312 | 0.1312 | 19.1116 |
| 2.9542 | 9.07 | 2500 | 2.6483 | 0.1554 | 0.0598 | 0.1308 | 0.1308 | 19.1331 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
ankhamun/x0_xxco0x
ankhamun
2024-04-01T10:08:43Z
123
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T09:59:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gitpullpull/artpull_diffusion_cascade
gitpullpull
2024-04-01T10:08:10Z
0
1
null
[ "text-to-image", "license:other", "region:us" ]
text-to-image
2024-04-01T09:26:52Z
---
pipeline_tag: text-to-image
license: other
license_name: stable-cascade-nc-community
license_link: LICENSE
---

# Art_pull_diffusion_cascade

This model is a fine-tuned version of sotediffusion-cascade-decoder_pre-alpha0. It was trained on seven A5000 GPUs during the LOCAL LLM Hackathon.

https://huggingface.co/Disty0/sotediffusion-cascade-decoder_pre-alpha0
Rahul0098/model
Rahul0098
2024-04-01T10:07:06Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:Falconsai/text_summarization", "base_model:adapter:Falconsai/text_summarization", "license:apache-2.0", "region:us" ]
null
2024-04-01T07:50:01Z
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- rouge
base_model: Falconsai/text_summarization
model-index:
- name: model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# model

This model is a fine-tuned version of [Falconsai/text_summarization](https://huggingface.co/Falconsai/text_summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2990
- Rouge1: 0.1186
- Rouge2: 0.0198
- Rougel: 0.094
- Rougelsum: 0.094
- Gen Len: 19.9958

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.896 | 1.0 | 600 | 3.3871 | 0.1105 | 0.0171 | 0.0874 | 0.0874 | 20.0 |
| 3.6922 | 2.0 | 1200 | 3.3257 | 0.116 | 0.0196 | 0.0921 | 0.0921 | 20.0 |
| 3.6451 | 3.0 | 1800 | 3.3037 | 0.1189 | 0.0203 | 0.0947 | 0.0947 | 19.9972 |
| 3.6179 | 4.0 | 2400 | 3.2990 | 0.1186 | 0.0198 | 0.094 | 0.094 | 19.9958 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
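## Inference sketch

Since this is a PEFT adapter on a seq2seq base, a hedged loading-and-generation sketch looks like this; the input text is a placeholder:

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "Falconsai/text_summarization"  # from the card's base_model field
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "Rahul0098/model")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Paste a long article here.", return_tensors="pt", truncation=True)
ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```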
MaziyarPanahi/AlloyingotneoyInex12-7B-GGUF
MaziyarPanahi
2024-04-01T10:01:29Z
36
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:MSL7/INEX12-7b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/AlloyingotneoyInex12-7B", "base_model:quantized:automerger/AlloyingotneoyInex12-7B" ]
text-generation
2024-04-01T09:38:38Z
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:MSL7/INEX12-7b
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: AlloyingotneoyInex12-7B-GGUF
base_model: automerger/AlloyingotneoyInex12-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/AlloyingotneoyInex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/AlloyingotneoyInex12-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/AlloyingotneoyInex12-7B](https://huggingface.co/automerger/AlloyingotneoyInex12-7B)

## Description

[MaziyarPanahi/AlloyingotneoyInex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/AlloyingotneoyInex12-7B-GGUF) contains GGUF format model files for [automerger/AlloyingotneoyInex12-7B](https://huggingface.co/automerger/AlloyingotneoyInex12-7B).

## How to use

Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note that, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

### Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

</details>

## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: [MaziyarPanahi/AlloyingotneoyInex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/AlloyingotneoyInex12-7B-GGUF) and below it, a specific filename to download, such as: AlloyingotneoyInex12-7B.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download MaziyarPanahi/AlloyingotneoyInex12-7B-GGUF AlloyingotneoyInex12-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download MaziyarPanahi/AlloyingotneoyInex12-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/AlloyingotneoyInex12-7B-GGUF AlloyingotneoyInex12-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.

</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m AlloyingotneoyInex12-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models (e.g. 8K, 16K, 32K), the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://github.com/abetlen/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./AlloyingotneoyInex12-7B.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./AlloyingotneoyInex12-7B.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
KorkmazKerem/gemma-2b-saban-v1
KorkmazKerem
2024-04-01T09:56:26Z
179
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T09:35:56Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gilmark123/finetuning-sentiment-model-3000-samples
gilmark123
2024-04-01T09:54:48Z
118
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-04-01T09:23:25Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
woojinheo/k-commerce-classification
woojinheo
2024-04-01T09:47:17Z
176
0
transformers
[ "transformers", "safetensors", "bart", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-04-01T09:46:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
siddheshtv/BlockNet10
siddheshtv
2024-04-01T09:43:24Z
4
0
transformers
[ "transformers", "custom_model", "custom", "cifar-10", "image-classification", "block-architecture", "en", "dataset:CIFAR-10", "endpoints_compatible", "region:us" ]
image-classification
2024-04-01T09:01:40Z
---
tags:
- custom
- cifar-10
- image-classification
- block-architecture
language: en
framework: pytorch
metrics:
- accuracy: 75.43
license: mit
datasets:
- CIFAR-10
---

# BlockNet10 - CNN for CIFAR-10 dataset

## Overview

BlockNet10 is a neural network architecture designed for image classification tasks using the CIFAR-10 dataset. This model implements a sequence of intermediate blocks (B1, B2, ..., BK) followed by an output block (O).

## Architecture Details

### Intermediate Block (Bi)

Each intermediate block receives an input image x and outputs an image x'. The block comprises L independent convolutional layers, denoted as C1, C2, ..., CL. Each convolutional layer Cl in a block operates on the input image x and outputs an image Cl(x).

<div style="display: flex; justify-content: center;">
  <img src="figures/eq1.png" alt="Equation 1" />
</div>

The output image x' is computed as x' = a1C1(x) + a2C2(x) + ... + aLCL(x), where a = [a1, a2, ..., aL]^T is a vector computed by the block. The vector a is obtained by computing the average value of each channel of x and passing the result through a fully connected layer with the same number of units as there are convolutional layers in the block.

<div style="display: flex; justify-content: center;">
  <img src="figures/fig1.png" alt="Figure 1" />
</div>

### Output Block (O)

The output block processes the final output image from the intermediate blocks for classification.

## Analytics

<div style="display: flex; justify-content: center; align-items: center;">
<table>
  <tr>
    <th>Epoch Number</th>
    <th>Train Accuracy</th>
    <th>Test Accuracy</th>
    <th>Average Loss</th>
  </tr>
  <tr>
    <td>50</td>
    <td>75.43</td>
    <td>80.56</td>
    <td>0.685</td>
  </tr>
</table>
</div>

## Clone on GitHub

You can contribute improvements to this architecture, hyperparameter changes, or issue fixes <a href="https://github.com/siddheshtv/cifar10" target="_blank">here</a>.

## Citation

If you use BlockNet10 in your research or work, please cite it as follows:

```bibtex
@article{blocknet10,
  title={BlockNet10: CIFAR-10 Image Classifier},
  author={Siddhesh Kulthe},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/siddheshtv/BlockNet10}
}
```
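## Reference Implementation (sketch)

For readers who want to see the block computation concretely, below is a minimal, hypothetical PyTorch sketch of one intermediate block. The channel count and the number of parallel convolutions are illustrative assumptions, not the exact training code:

```python
import torch
import torch.nn as nn

class IntermediateBlock(nn.Module):
    """One block B_i: L parallel conv layers combined by learned weights a."""
    def __init__(self, channels: int, num_convs: int = 2):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1)
             for _ in range(num_convs)]
        )
        # Channel averages -> fully connected layer -> one weight per conv layer
        self.fc = nn.Linear(channels, num_convs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # a = FC(mean over spatial dims of each channel), shape (batch, L)
        a = self.fc(x.mean(dim=(2, 3)))
        # x' = a_1 * C_1(x) + ... + a_L * C_L(x)
        outs = torch.stack([conv(x) for conv in self.convs], dim=1)  # (B, L, C, H, W)
        return (a[:, :, None, None, None] * outs).sum(dim=1)
```

A full BlockNet10 would stack several such blocks and finish with the output block O, which pools the final feature map and classifies it.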
SidXXD/test4crossattentionmap-all
SidXXD
2024-04-01T09:39:31Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "custom-diffusion", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-04-01T09:34:51Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---

# Custom Diffusion - SidXXD/test4crossattentionmap-all

These are Custom Diffusion adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on `photo of a <new1> dog` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.

For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
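As a rough usage sketch, following the diffusers Custom Diffusion example - the weight filenames below are assumptions and may differ in this repository:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
# Filenames follow the diffusers Custom Diffusion example; check this repo for the actual names
pipe.unet.load_attn_procs(
    "SidXXD/test4crossattentionmap-all", weight_name="pytorch_custom_diffusion_weights.bin"
)
pipe.load_textual_inversion("SidXXD/test4crossattentionmap-all", weight_name="<new1>.bin")

image = pipe(
    "photo of a <new1> dog", num_inference_steps=50, guidance_scale=6.0
).images[0]
image.save("new1_dog.png")
```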
Sumail/Goat_Derrick20
Sumail
2024-04-01T09:38:37Z
90
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:coffiee/s25", "base_model:merge:coffiee/s25", "base_model:coffiee/s26", "base_model:merge:coffiee/s26", "base_model:lxsure/Sniper_21", "base_model:merge:lxsure/Sniper_21", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T09:36:59Z
--- base_model: - coffiee/s25 - lxsure/Sniper_21 - coffiee/s26 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [coffiee/s25](https://huggingface.co/coffiee/s25) as a base. ### Models Merged The following models were included in the merge: * [lxsure/Sniper_21](https://huggingface.co/lxsure/Sniper_21) * [coffiee/s26](https://huggingface.co/coffiee/s26) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: coffiee/s25 # no parameters necessary for base model - model: coffiee/s26 parameters: density: 0.5 weight: 0.3 - model: lxsure/Sniper_21 parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: coffiee/s25 parameters: normalize: true dtype: bfloat16 ```
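For intuition, TIES works per parameter tensor on the deltas from the base model: trim low-magnitude entries down to the configured `density`, elect a majority sign per parameter, then average only the values that agree with it. A simplified, hypothetical sketch of that idea (mergekit's actual implementation handles many more details, including the `normalize` option above):

```python
import torch

def ties_merge_tensor(base, finetuned, weights, density=0.5):
    """Simplified TIES for one tensor: trim -> elect sign -> disjoint merge."""
    deltas = [ft - base for ft in finetuned]  # task vectors
    trimmed = []
    for d in deltas:
        k = max(1, int(d.numel() * density))
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))  # keep top-k by magnitude
    stacked = torch.stack([w * t for w, t in zip(weights, trimmed)])
    elected = stacked.sum(dim=0).sign()                    # majority sign per parameter
    agree = (stacked.sign() == elected) & (stacked != 0)   # drop values fighting the elected sign
    merged = torch.where(agree, stacked, torch.zeros_like(stacked)).sum(dim=0)
    return base + merged / agree.sum(dim=0).clamp(min=1)   # mean of agreeing values
```

With the config above, `base` would come from coffiee/s25 and `weights` would be `[0.3, 0.5]` for the two merged models.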
HachiML/Llama2-jp-127M-2
HachiML
2024-04-01T09:33:53Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T07:15:35Z
--- tags: - generated_from_trainer model-index: - name: Llama2-jp-127M-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama2-jp-127M-2 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0024 - train_batch_size: 192 - eval_batch_size: 192 - seed: 42 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.5846 | 0.05 | 1000 | 3.5530 | | 3.3978 | 0.1 | 2000 | 3.3095 | | 3.2474 | 0.15 | 3000 | 3.1697 | | 3.1079 | 0.2 | 4000 | 3.0595 | | 3.0201 | 0.25 | 5000 | 2.9864 | | 2.9566 | 0.29 | 6000 | 2.9366 | | 2.9115 | 0.34 | 7000 | 2.8954 | | 2.8732 | 0.39 | 8000 | 2.8627 | | 2.8423 | 0.44 | 9000 | 2.8328 | | 2.8131 | 0.49 | 10000 | 2.8052 | | 2.7855 | 0.54 | 11000 | 2.7809 | | 2.7623 | 0.59 | 12000 | 2.7551 | | 2.737 | 0.64 | 13000 | 2.7301 | | 2.7102 | 0.69 | 14000 | 2.7039 | | 2.686 | 0.74 | 15000 | 2.6771 | | 2.6554 | 0.79 | 16000 | 2.6495 | | 2.6273 | 0.83 | 17000 | 2.6208 | | 2.5984 | 0.88 | 18000 | 2.5911 | | 2.5693 | 0.93 | 19000 | 2.5611 | | 2.5391 | 0.98 | 20000 | 2.5366 | ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Chaitanya798800/videomae-large_5class_UCFCrime
Chaitanya798800
2024-04-01T09:33:13Z
6
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-large", "base_model:finetune:MCG-NJU/videomae-large", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-03-31T18:09:20Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-large tags: - generated_from_trainer metrics: - accuracy - precision - recall model-index: - name: videomae-large_5class_UCFCrime results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-large_5class_UCFCrime This model is a fine-tuned version of [MCG-NJU/videomae-large](https://huggingface.co/MCG-NJU/videomae-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6826 - Accuracy: 0.7523 - Precision: 0.7853 - Recall: 0.7523 - F1 Score: 0.7364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2850 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:| | 1.5794 | 0.03 | 95 | 1.4947 | 0.4128 | 0.2370 | 0.4128 | 0.2975 | | 1.7243 | 1.03 | 190 | 1.3362 | 0.5321 | 0.4612 | 0.5321 | 0.4143 | | 1.6818 | 2.03 | 285 | 1.3317 | 0.3119 | 0.4356 | 0.3119 | 0.2364 | | 1.0727 | 3.03 | 380 | 1.1666 | 0.5046 | 0.5546 | 0.5046 | 0.4337 | | 1.1121 | 4.03 | 475 | 1.3545 | 0.4587 | 0.6563 | 0.4587 | 0.3678 | | 1.0062 | 5.03 | 570 | 1.4656 | 0.5963 | 0.5376 | 0.5963 | 0.5325 | | 1.2532 | 6.03 | 665 | 1.6516 | 0.5780 | 0.5596 | 0.5780 | 0.4982 | | 1.9184 | 7.03 | 760 | 1.5020 | 0.5872 | 0.7283 | 0.5872 | 0.5449 | | 1.0223 | 8.03 | 855 | 1.4417 | 0.5872 | 0.5636 | 0.5872 | 0.5027 | | 0.9406 | 9.03 | 950 | 1.9402 | 0.5780 | 0.5800 | 0.5780 | 0.4906 | | 1.3058 | 10.03 | 1045 | 1.7611 | 0.4679 | 0.6914 | 0.4679 | 0.4463 | | 0.9196 | 11.03 | 1140 | 1.0373 | 0.6330 | 0.5648 | 0.6330 | 0.5625 | | 0.4191 | 12.03 | 1235 | 0.9139 | 0.6789 | 0.7553 | 0.6789 | 0.6824 | | 0.4816 | 13.03 | 1330 | 1.0840 | 0.7248 | 0.8001 | 0.7248 | 0.7176 | | 1.0577 | 14.03 | 1425 | 0.9822 | 0.7339 | 0.7724 | 0.7339 | 0.7216 | | 0.719 | 15.03 | 1520 | 1.4597 | 0.6789 | 0.6954 | 0.6789 | 0.6715 | | 0.3427 | 16.03 | 1615 | 1.4807 | 0.6789 | 0.7114 | 0.6789 | 0.6818 | | 0.6303 | 17.03 | 1710 | 1.9664 | 0.6881 | 0.7482 | 0.6881 | 0.6604 | | 0.0025 | 18.03 | 1805 | 1.5750 | 0.7156 | 0.7403 | 0.7156 | 0.7067 | | 0.2404 | 19.03 | 1900 | 2.2045 | 0.6606 | 0.7090 | 0.6606 | 0.6292 | | 0.0313 | 20.03 | 1995 | 1.6007 | 0.7248 | 0.7776 | 0.7248 | 0.7091 | | 0.3372 | 21.03 | 2090 | 1.6536 | 0.7156 | 0.7811 | 0.7156 | 0.6864 | | 0.5431 | 22.03 | 2185 | 2.1961 | 0.6514 | 0.7559 | 0.6514 | 0.6161 | | 0.0003 | 23.03 | 2280 | 1.6826 | 0.7523 | 0.7853 | 0.7523 | 0.7364 | | 0.0013 | 24.03 | 2375 | 1.8359 | 0.7431 | 0.7906 | 0.7431 | 0.7291 | | 0.0025 | 25.03 | 2470 | 2.0891 | 0.6881 | 0.7433 | 0.6881 | 0.6611 | | 0.0153 | 26.03 | 2565 | 1.8442 | 0.7431 | 0.7855 | 0.7431 | 0.7296 | | 0.0147 | 27.03 | 2660 | 1.8483 | 0.7431 | 0.7816 | 0.7431 | 0.7243 | | 0.0002 | 28.03 | 2755 | 1.9301 | 0.7339 | 0.7765 | 0.7339 | 0.7150 | | 0.0004 | 29.03 | 2850 | 1.9169 | 0.7339 | 0.7765 | 0.7339 | 0.7150 | ### Framework versions - Transformers 4.39.2 - 
Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
jisukim8873/mistral-7B-alpaca-case-3-3
jisukim8873
2024-04-01T09:29:47Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T08:36:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ajunravi/distilgpt2-finetuned-wikitext2
ajunravi
2024-04-01T09:29:23Z
195
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T08:24:54Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
janis332/my_awesome_wnut_model
janis332
2024-04-01T09:28:32Z
114
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-04-01T01:21:02Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1401 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.4444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 1 | 2.2267 | 0.0 | 0.0 | 0.0 | 0.3333 | | No log | 2.0 | 2 | 2.1401 | 0.0 | 0.0 | 0.0 | 0.4444 | ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Yorick/wo_cap_car
Yorick
2024-04-01T09:28:17Z
11
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-04-01T07:58:59Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---

# Textual inversion text2image fine-tuning - Yorick/wo_cap_car

These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
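A minimal usage sketch with diffusers - the placeholder token below is an assumption; check the repository's learned embedding for the token this concept was actually trained under:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("Yorick/wo_cap_car")  # loads the learned embedding from this repo

# "<wo_cap_car>" is a hypothetical placeholder token; use the token the embedding was trained with
image = pipe("a photo of a <wo_cap_car> car").images[0]
image.save("example.png")
```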
MaziyarPanahi/Experiment28T3q-7B-GGUF
MaziyarPanahi
2024-04-01T09:25:25Z
51
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/Experiment28T3q-7B", "base_model:quantized:automerger/Experiment28T3q-7B" ]
text-generation
2024-04-01T09:02:39Z
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Experiment28T3q-7B-GGUF
base_model: automerger/Experiment28T3q-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/Experiment28T3q-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment28T3q-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Experiment28T3q-7B](https://huggingface.co/automerger/Experiment28T3q-7B)

## Description
[MaziyarPanahi/Experiment28T3q-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment28T3q-7B-GGUF) contains GGUF format model files for [automerger/Experiment28T3q-7B](https://huggingface.co/automerger/Experiment28T3q-7B).

## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

### Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

</details>

## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: [MaziyarPanahi/Experiment28T3q-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment28T3q-7B-GGUF) and below it, a specific filename to download, such as: Experiment28T3q-7B.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download MaziyarPanahi/Experiment28T3q-7B-GGUF Experiment28T3q-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download MaziyarPanahi/Experiment28T3q-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Experiment28T3q-7B-GGUF Experiment28T3q-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.

</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m Experiment28T3q-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://github.com/abetlen/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./Experiment28T3q-7B.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./Experiment28T3q-7B.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
HeydarS/flan-t5-xxl_peft_v7
HeydarS
2024-04-01T09:21:28Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/flan-t5-xxl", "base_model:adapter:google/flan-t5-xxl", "region:us" ]
null
2024-04-01T09:21:19Z
--- library_name: peft base_model: google/flan-t5-xxl --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
GenVRadmin/AryaBhatta-GemmaOrca
GenVRadmin
2024-04-01T09:21:22Z
8
1
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-21T05:41:22Z
---
license: mit
---

This model is finetuned from HuggingFaceH4/zephyr-7b-gemma-v0.1 on 9 Indian languages (Hindi, Tamil, Punjabi, Bengali, Gujarati, Oriya, Telugu, Kannada, Malayalam) plus English.

To improve the reasoning and maths skills, we first SFT-tune Gemma on Microsoft's Orca datasets. We utilize the Orca maths Hindi dataset (GenVRadmin/Aryabhatta-Orca-Maths-Hindi) and the original Orca maths dataset (microsoft/orca-math-word-problems-200k).

This pushes the MATHS score from 24.3 in Gemma-7B to 25.5 in Zephyr-Gemma and 31.6 in GemmaOrca.

The model is then finetuned on GenVR's Samvaad datasets (GenVRadmin/Samvaad-Indic-Positive, GenVRadmin/Samvaad-Tamil-Mixtral and a subset of GenVRadmin/Samvaad-Mixed-Language-3).

This is then finetuned on various open-sourced datasets like:

- Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized
- Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized
- abhinand/tamil-alpaca
- Tensoic/airoboros-3.2_kn
- Tensoic/gpt-teacher_kn
- Tensoic/Alpaca-Gujarati
- HydraIndicLM/bengali_alpaca_dolly_67k
- Open-Orca/OpenOrca
- pankajmathur/alpaca_orca
- OdiaGenAI/Odia_Alpaca_instructions_52k
- OdiaGenAI/gpt-teacher-roleplay-odia-3k
- GenVRadmin/Samvaad-Punjabi-Mini
- pankajmathur/WizardLM_Orca

The model achieves the following scores on benchmarks:

| Model | AGIEval | GPT4All | TruthfulQA | BigBench | Average ⬇️ |
|---|---|---|---|---|---|
| AryaBhatta-GemmaOrca | 35.9 | 72.26 | 53.85 | 40.35 | 50.59 |
| zephyr-7b-beta | 37.52 | 71.77 | 55.26 | 39.77 | 51.08 |
| zephyr-7b-gemma-v0.1 | 34.22 | 66.37 | 52.19 | 37.10 | 47.47 |
| mlabonne/Gemmalpaca-7B | 21.6 | 40.87 | 44.85 | 30.49 | 34.45 |
| google/gemma-7b-it | 21.33 | 40.84 | 41.70 | 30.25 | 33.53 |

How to use:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

hf_token = "hf_..."  # your Hugging Face access token

model = AutoPeftModelForCausalLM.from_pretrained(
    "GenVRadmin/AryaBhatta-GemmaOrca",
    load_in_4bit = False,
    token = hf_token
)
tokenizer = AutoTokenizer.from_pretrained("GenVRadmin/AryaBhatta-GemmaOrca")

input_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""

input_text = input_prompt.format(
    "Answer this question about India.",  # instruction
    "Who is the Prime Minister of India",  # input
    "",  # output - leave this blank for generation!
)

inputs = tokenizer([input_text], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True)
response = tokenizer.batch_decode(outputs)[0]
```
SpotLab/microfilariae_detection
SpotLab
2024-04-01T09:19:19Z
3
0
transformers
[ "transformers", "tflite", "mobilenetV2", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2023-10-10T14:23:10Z
---
license: cc-by-nc-sa-4.0
---

This model is an object detection model trained with the TensorFlow Object Detection API, published with the paper [Edge Artificial Intelligence for real-time automatic quantification of filariasis in mobile microscopy](https://www.medrxiv.org/content/10.1101/2023.08.02.23293538v1)

- Developed by: Spotlab
- Model type: SSD MobileNet v2
- Classes: Microfilaria
- Model input: image resized to 640x640 and normalized with mean=127.5 and std=127.5
- Datasets:
  - Training set: 700 field-of-view images (100x magnification) from 85 samples with 1965 microfilariae
  - Validation set: 173 field-of-view images (100x magnification) from 30 samples with 328 microfilariae
  - Test set: 453 field-of-view images (100x magnification) from 30 samples with 328 microfilariae
- Performance:
  - On the validation set: 88.17% precision, 91.62% recall, and 89.85% F1 score
  - On the test set: 94.14% precision, 91.90% recall, and 93.01% F1 score

Example detections

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6509bcfc7e0d56c2717248be/t0rnWqTt69hbG4zPwimcu.jpeg)
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6509bcfc7e0d56c2717248be/5i7H_eo_z7SFLAFrlL8-e.jpeg)

The model was trained on the square region inside the field of view rather than the full field of view. To reproduce the reported performance, use the same scale. You can create your own Android app to run this model following this tutorial: [TensorFlow Lite Object Detection Android Demo](https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android)
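A minimal inference sketch with the TensorFlow Lite interpreter, assuming a float32 SSD export and the preprocessing described above (the filenames and output ordering are assumptions - check the exported model's signature):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# "model.tflite" is a placeholder; use the .tflite file from this repository
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize to 640x640 and normalize with mean=127.5, std=127.5 as described above
img = Image.open("field_of_view.jpg").convert("RGB").resize((640, 640))
x = (np.asarray(img, dtype=np.float32) - 127.5) / 127.5

interpreter.set_tensor(input_details[0]["index"], x[None, ...])
interpreter.invoke()

# SSD exports typically return boxes, classes, scores and a detection count; order can vary
outputs = [interpreter.get_tensor(o["index"]) for o in output_details]
```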
smutuvi/wav2vec2-large-xlsr-sw_ndizi_782_NF4-April-1
smutuvi
2024-04-01T09:19:15Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:smutuvi/wav2vec2-large-xlsr-sw-common-voice-16", "base_model:adapter:smutuvi/wav2vec2-large-xlsr-sw-common-voice-16", "license:apache-2.0", "region:us" ]
null
2024-04-01T09:17:53Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer metrics: - wer base_model: smutuvi/wav2vec2-large-xlsr-sw-common-voice-16 model-index: - name: wav2vec2-large-xlsr-sw_ndizi_782_NF4-April-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-sw_ndizi_782_NF4-April-1 This model is a fine-tuned version of [smutuvi/wav2vec2-large-xlsr-sw-common-voice-16](https://huggingface.co/smutuvi/wav2vec2-large-xlsr-sw-common-voice-16) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1139 - Wer: 0.9993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1072 | 1.0 | 135 | 3.4546 | 0.9993 | | 2.88 | 2.0 | 270 | 3.1342 | 0.9986 | | 2.9526 | 3.0 | 405 | 3.2622 | 1.0 | | 2.8646 | 4.0 | 540 | 3.0074 | 1.0 | | 2.8826 | 5.0 | 675 | 3.0303 | 0.9993 | | 2.8805 | 6.0 | 810 | 3.4972 | 1.0028 | | 2.8354 | 7.0 | 945 | 3.3143 | 1.0036 | | 2.8927 | 8.0 | 1080 | 3.0944 | 1.0 | | 2.8531 | 9.0 | 1215 | 3.2952 | 1.0178 | | 2.8569 | 10.0 | 1350 | 3.1139 | 0.9993 | ### Framework versions - PEFT 0.9.0 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.18.0 - Tokenizers 0.15.1
blockblockblock/Faro-Yi-9B-200K-bpw6
blockblockblock
2024-04-01T09:18:38Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "en", "dataset:wenbopan/Fusang-v1", "dataset:wenbopan/OpenOrca-zh-20k", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-04-01T09:15:38Z
---
license: mit
datasets:
- wenbopan/Fusang-v1
- wenbopan/OpenOrca-zh-20k
language:
- zh
- en
---

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp)

**The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro seamlessly works in both English and Chinese.**

# Faro-Yi-9B

Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1.

## How to Use

Faro-Yi-9B uses the ChatML template. This makes it easy to set up system prompts and multi-turn conversations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "wenbopan/Faro-Yi-9B"  # assumed repo id of the original model; adjust to a local path or quantized repo as needed

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="cuda"
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
    {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5)
response = tokenizer.decode(generated_ids[0], skip_special_tokens=True)

# Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. It's like this: if you have a triangle with two sides, you can find the length of the third side by squaring the two sides and then adding them together. The square root of that sum will give you the length of the third side! It's useful for sailing and navigating, so you always know how far you've traveled. Remember, it's all about the sum of squares, me hearties!
```

## Performance

Faro-Yi-9B enhances its ability compared to Yi-9B-200K in most dimensions, especially in long-range modeling and bilingual (English, Chinese) understanding. Faro is competitive among all open-sourced models at around 9B parameters.
<details> <summary>Benchmark Results</summary>

### Fact-based Evaluation (Open LLM Leaderboard)

| **Metric**     | **MMLU**  | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc**   | **Winogrande** |
| -------------- | --------- | --------- | ------------- | -------------- | --------- | -------------- |
| **Yi-9B-200K** | 65.73     | 50.49     | 56.72         | 33.80          | 69.25     | 71.67          |
| **Faro-Yi-9B** | **68.80** | **63.08** | **57.28**     | **40.86**      | **72.58** | 71.11          |

### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench))

| **Name**       | **Average_zh** | **Average_en** | **Code Completion** |
|----------------|----------------|----------------|---------------------|
| **Yi-9B-200K** | 30.288         | 36.7071        | 72.2                |
| **Faro-Yi-9B** | **41.092**     | **40.9536**    | 46.0                |

<details> <summary>Score breakdown</summary>

| **Name**       | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** |
|----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------|
| **Yi-9B-200K** | 60.6                     | 22.8                   | 30.9                 | 38.9                | 25.8                 | 46.5                     | 28.0                   | 49.6                 | 17.7                | 9.7                  |
| **Faro-Yi-9B** | **63.8**                 | **40.2**               | **36.2**             | 38.0                | **26.3**             | 30.0                     | **75.1**               | **55.6**             | **30.7**            | **14.1**             |

</details>

<!--### Performance on Preference TODO-->

### Bilingual Ability (CMMLU & MMLU)

| **Name**       | **MMLU**  | **CMMLU** |
| -------------- | --------- | --------- |
| **Yi-9B-200K** | 65.73     | 71.97     |
| **Faro-Yi-9B** | **68.80** | **73.28** |

</details>
Sumail/Goat_Derrick19
Sumail
2024-04-01T09:18:26Z
91
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "mergekit", "merge", "conversational", "base_model:zzttbrdd/sn6_00s", "base_model:merge:zzttbrdd/sn6_00s", "base_model:zzttbrdd/sn6_05s", "base_model:merge:zzttbrdd/sn6_05s", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T09:16:55Z
---
base_model:
- zzttbrdd/sn6_00s
- zzttbrdd/sn6_05s
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [zzttbrdd/sn6_00s](https://huggingface.co/zzttbrdd/sn6_00s)
* [zzttbrdd/sn6_05s](https://huggingface.co/zzttbrdd/sn6_05s)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: zzttbrdd/sn6_00s
        layer_range: [0, 24]
      - model: zzttbrdd/sn6_05s
        layer_range: [0, 24]
merge_method: slerp
base_model: zzttbrdd/sn6_00s
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
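For intuition, the SLERP method above interpolates each pair of weight tensors along the arc between them rather than the straight-line average a linear merge would take. The toy function below only illustrates the formula; it is a hypothetical sketch, not mergekit's actual implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two (normalized) weight vectors.
    cos_omega = torch.dot(a_flat / (a_flat.norm() + eps), b_flat / (b_flat.norm() + eps))
    omega = torch.acos(torch.clamp(cos_omega, -1.0, 1.0))
    if omega.abs() < 1e-4:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

# t varies per layer in the config above: t=0 keeps the first model, t=1 the second.
merged = slerp(0.5, torch.randn(4, 4), torch.randn(4, 4))
print(merged.shape)
```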
Sumail/Goat_Derrick18
Sumail
2024-04-01T09:15:24Z
91
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "mergekit", "merge", "conversational", "base_model:lxsure/Sniper_21", "base_model:merge:lxsure/Sniper_21", "base_model:zzttbrdd/sn6_00s", "base_model:merge:zzttbrdd/sn6_00s", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T09:13:56Z
---
base_model:
- lxsure/Sniper_21
- zzttbrdd/sn6_00s
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [lxsure/Sniper_21](https://huggingface.co/lxsure/Sniper_21)
* [zzttbrdd/sn6_00s](https://huggingface.co/zzttbrdd/sn6_00s)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: zzttbrdd/sn6_00s
        layer_range: [0, 24]
      - model: lxsure/Sniper_21
        layer_range: [0, 24]
merge_method: slerp
base_model: zzttbrdd/sn6_00s
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
akameswa/mistral-7b-instruct-java-4bit
akameswa
2024-04-01T08:59:34Z
82
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/mistral-7b-v0.2-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-04-01T08:57:57Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-v0.2-bnb-4bit
---

# Uploaded model

- **Developed by:** akameswa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
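A minimal load-and-generate sketch, assuming a CUDA machine with `bitsandbytes` installed (the checkpoint ships 4-bit weights); the Java prompt is a hypothetical example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "akameswa/mistral-7b-instruct-java-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # 4-bit weights load via bitsandbytes

prompt = "Write a Java method that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```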
Sumail/Goat_Derrick17
Sumail
2024-04-01T08:57:33Z
91
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "mergekit", "merge", "conversational", "base_model:coffiee/s26", "base_model:merge:coffiee/s26", "base_model:zzttbrdd/sn6_05s", "base_model:merge:zzttbrdd/sn6_05s", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T08:55:47Z
---
base_model:
- zzttbrdd/sn6_05s
- coffiee/s26
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [zzttbrdd/sn6_05s](https://huggingface.co/zzttbrdd/sn6_05s)
* [coffiee/s26](https://huggingface.co/coffiee/s26)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: zzttbrdd/sn6_05s
        layer_range: [0, 24]
      - model: coffiee/s26
        layer_range: [0, 24]
merge_method: slerp
base_model: coffiee/s26
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
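A merged checkpoint like this loads as an ordinary causal LM; the sketch below assumes a transformers version with native StableLM support, and the prompt is a hypothetical example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Sumail/Goat_Derrick17"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```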
MyMoodAI/basicmood
MyMoodAI
2024-04-01T08:56:14Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/gpt-neo-1.3B", "base_model:adapter:EleutherAI/gpt-neo-1.3B", "region:us" ]
null
2024-03-30T05:41:13Z
---
library_name: peft
base_model: EleutherAI/gpt-neo-1.3B
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from peft import PeftModel

# Load the base model first, then attach the mood-classification adapter on top.
model_name = "EleutherAI/gpt-neo-1.3B"
adapters_name = "MyMoodAI/basicmood"

torch.cuda.empty_cache()

print(f"Starting to load the model {model_name} into memory")

m = AutoModelForCausalLM.from_pretrained(
    model_name,
    # load_in_4bit=True,
)

print(f"Loading the adapters from {adapters_name}")
m = PeftModel.from_pretrained(m, adapters_name)

tokenizer = AutoTokenizer.from_pretrained("MyMoodAI/basicmood", trust_remote_code=True)

while True:
    mood_input = input("Mood: ")
    inputs = tokenizer(
        "Prompt: %s ### Answer: " % mood_input,
        return_tensors="pt",
        return_attention_mask=True,
    )
    outputs = m.generate(**inputs, max_length=24)
    print(tokenizer.batch_decode(outputs)[0])
```

The training procedure is at the very bottom.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

Classifies guilty, anxious and depressed mood states (rudimentary, with low accuracy); trained on a generic dataset generated with the Gemini API.

- **Developed by:** Emmanuel Nsanga (workspace and a communication channel on Slack provided mainly by the AI Builders Club - thebuilderclub.org - along with Canberra Deep Learning and the Sydney Startup Hub)
- **Funded by:** Emmanuel Nsanga & Roy Kwan
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** English (only, for now)
- **License:** BigScience RAIL
- **Finetuned from model:** EleutherAI/gpt-neo-1.3B

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

Risks: total inaccuracy and a shallow grasp of sensitive human emotions. (Kudos - 'Crystal Pang')

Limitations: no real understanding of emotions - human feedback is still needed. Bias: out-of-distribution bias and model size. (Kudos Leo Chow)

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.
[More Information Needed]

## Training Details

### Training Data

Gemini API prompts - (Generate 1000 samples of very simple guilty/anxious/depressed mood states as short sentences)

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

SFTTrainer (Kudos - Cheng Yu at Canberra DL)

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

0.0007 loss (improved by hyperparameter optimisation)

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** just under 10 hours to fine-tune (and roughly six months of Google Colab Pro+ overall)
- **Cloud Provider:** Google Colab Pro+
- **Compute Region:** Sydney
- **Carbon Emitted:** refer to Google data centre emissions management

## Technical Specifications [optional]

Trained for under two hours for one epoch.

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

Google Colab Pro+, Vultr, AWS

#### Hardware

- V100 High-RAM (for fine-tuning)
- CPU: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz, octa-core (4GB RAM - 4GB swap)

#### Software

[More Information Needed]

## Citation [optional]

EleutherAI

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card.
-->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[email protected]

[More Information Needed]

### Framework versions

- PEFT 0.10.0

```
!pip3 install -q -U google-generativeai

import google.generativeai as genai

GOOGLE_API_KEY = ''
genai.configure(api_key=GOOGLE_API_KEY)
model = genai.GenerativeModel('gemini-pro')

# Generate three small synthetic mood datasets with the Gemini API.
response = model.generate_content("Generate a 1000 samples of very simple guilty mood states of short sentences", stream=True)
response.resolve()
guiltsamples = response.text.split('\n')

response = model.generate_content("Generate a 1000 samples of very simple anxious mood states of short sentences", stream=True)
response.resolve()
anxioussamples = response.text.split('\n')

response = model.generate_content("Generate a 1000 samples of very simple depressed mood states of short sentences", stream=True)
response.resolve()
depressedsamples = response.text.split('\n')

# Pair each generated sentence with its mood label.
guiltsamples = list(zip(guiltsamples, ["You're feeling guilty" for d in range(len(guiltsamples))]))
anxioussamples = list(zip(anxioussamples, ["You're feeling anxious" for d in range(len(anxioussamples))]))
depressedsamples = list(zip(depressedsamples, ["You're feeling depressed" for d in range(len(depressedsamples))]))

data = guiltsamples + anxioussamples + depressedsamples

# Deduplicated imports (the original listed several of these more than once).
import pandas as pd
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model
from trl import SFTTrainer, DataCollatorForCompletionOnlyLM

torch.cuda.empty_cache()


class TrainModel:
    def __init__(self, params, data, accu_epochs):
        self.quant_config = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_use_double_quant=True,  # fixed typo: was bnb_16bit_use_double_quant
            bnb_4bit_quant_type="nf4",
            bnb_4bit_compute_dtype=torch.bfloat16
        )
        self.model = AutoModelForCausalLM.from_pretrained(
            "EleutherAI/gpt-neo-1.3B",
            quantization_config=self.quant_config,
            device_map="auto"
        )
        self.tokenizer = AutoTokenizer.from_pretrained(
            "EleutherAI/gpt-neo-1.3B",
        )
        self.params = params
        self.data = data
        self.epochs = accu_epochs

    def lora_config(self):
        lora_config = LoraConfig(
            r=abs(int(self.params['r'] * 100)),
            lora_alpha=int(self.params['alpha']),
            target_modules=["Wqkv", "out_proj"],
            lora_dropout=float(self.params['dropout']),  # fixed: int() truncated the dropout to 0
            bias="none",
            task_type="CAUSAL_LM"
        )
        print(self.params['r'], self.params['dropout'], self.params['alpha'])
        return lora_config

    def formatting_prompts_func(self, example):
        output_texts = []
        for i in range(len(example['Prompt'])):
            text = f"### Question: {example['Prompt'][i]}\n ### Answer: {example['Completion'][i]}"
            output_texts.append(text)
        return output_texts

    def prepare_data(self):
        df = pd.DataFrame(self.data, columns=['Prompt', 'Completion'])
        data = Dataset.from_pandas(df)
        return data

    def training(self):
        print(abs(self.params['r'].item() * 100))
        # Note: this config is built but unused; the model was already quantized in __init__.
        bnb_config = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type="nf4",
            bnb_4bit_compute_dtype=torch.float16
        )
        self.model = get_peft_model(self.model, self.lora_config())
        training_arguments = TrainingArguments(
            optim='paged_adamw_8bit',
            output_dir="Multi-lingual-finetuned-med-text",
            per_device_train_batch_size=4,
            gradient_accumulation_steps=4,
            lr_scheduler_type="cosine",
            save_strategy="epoch",
            logging_steps=100,
            max_steps=10000,
            warmup_steps=10,
            num_train_epochs=self.epochs,
            fp16=True
        )
        self.tokenizer.pad_token = self.tokenizer.eos_token
        response_template = " ### Answer:"
        collator = DataCollatorForCompletionOnlyLM(response_template, tokenizer=self.tokenizer)
        trainer = SFTTrainer(
            model=self.model,
            train_dataset=self.prepare_data(),
            args=training_arguments,
            formatting_func=self.formatting_prompts_func,
            data_collator=collator
        )
        trainer.train()
        print(trainer.state.log_history)
        return trainer.state.log_history[0]['loss']


class HyperParam:
    def __init__(self):
        self.meana = torch.Tensor([8])
        self.stda = torch.Tensor([0.1])
        self.meanr = torch.Tensor([16.])
        self.stdr = torch.Tensor([1.])
        self.meand = torch.Tensor([.25])
        self.stdd = torch.Tensor([0.01])
        self.lr = 0.5
        self.accu_epochs = 1

    def sample_params(self):
        alpha = torch.distributions.Normal(self.meana.unsqueeze(0), self.stda.unsqueeze(0))
        dropout = torch.distributions.Normal(self.meand.unsqueeze(0), self.stdd.unsqueeze(0))
        # fixed: r was sampled from the dropout distribution (meand/stdd) instead of meanr/stdr
        r = torch.distributions.Normal(self.meanr.unsqueeze(0), self.stdr.unsqueeze(0))
        return {'alpha': alpha.sample(), 'dropout': dropout.sample(), 'r': r.sample()}

    def loss(self):
        Training = TrainModel(self.sample_params(), data, self.accu_epochs)
        loss = Training.training()
        return loss  # fixed: previously returned the constant 12, discarding the training loss

    def hyper(self):
        optimizer = torch.optim.Adagrad([self.meanr, self.stdr, self.meana, self.stda, self.meand, self.stdd], self.lr)
        while True:
            scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
            scheduler.step()
            params = optimizer.param_groups
            params = params[0]['params']
            optimizer.step(closure=self.loss)
            self.lr = scheduler.get_last_lr()[0]
            self.meanr = params[0]
            self.stdr = params[1]
            self.meana = params[2]
            self.stda = params[3]
            self.meand = params[4]
            self.stdd = params[5]
            self.accu_epochs += 1


Hyper = HyperParam()
Hyper.hyper()
```
Mofe/emotiscan_model_2
Mofe
2024-04-01T08:43:50Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-31T13:01:39Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: emotiscan_model_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# emotiscan_model_2

This model is a fine-tuned version of [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5213
- Accuracy: 0.8184

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4986        | 1.0   | 4157  | 0.4979          | 0.8194   |
| 0.453         | 2.0   | 8315  | 0.4985          | 0.8200   |
| 0.4098        | 3.0   | 12471 | 0.5213          | 0.8184   |

### Framework versions

- Transformers 4.29.0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.13.3
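A quick inference sketch with the pipeline API. The input sentence is hypothetical, and since the card does not document the label mapping, expect generic `LABEL_*` names unless the model config defines better ones:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Mofe/emotiscan_model_2")
print(classifier("I can't stop smiling today!"))  # e.g. [{'label': 'LABEL_1', 'score': ...}]
```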
junhyeok0415/new_model3
junhyeok0415
2024-04-01T08:43:40Z
0
0
peft
[ "peft", "safetensors", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-04-01T07:30:20Z
--- library_name: peft base_model: mistralai/Mistral-7B-Instruct-v0.2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
bibhudutta/mistral7b_lora8_epochs3_snIAST_enLatin
bibhudutta
2024-04-01T08:43:34Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-04-01T08:38:01Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
lxsure/Sniper_21
lxsure
2024-04-01T08:42:57Z
91
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T08:40:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
0x7o/BulgakovLM-3B
0x7o
2024-04-01T08:37:45Z
133
0
transformers
[ "transformers", "pytorch", "safetensors", "gptj", "text-generation", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-10-02T11:25:45Z
---
language:
- ru
license: apache-2.0
pipeline_tag: text-generation
---

# BulgakovLM 3B

A language model trained on Russian. May be suitable for further tuning. The 100-gigabyte dataset consisted primarily of web pages, books, poems, and prose. The model was trained over 2 epochs. It uses the GPT-J architecture with a context window of 4k tokens.

Trained thanks to a TRC grant on a TPU-VM v3-8.

# Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("0x7o/BulgakovLM-3B")
model = AutoModelForCausalLM.from_pretrained("0x7o/BulgakovLM-3B")

# The prompt translates as "Artificial intelligence is".
input_ids = tokenizer("Π˜ΡΠΊΡƒΡΡΡ‚Π²Π΅Π½Π½Ρ‹ΠΉ ΠΈΠ½Ρ‚Π΅Π»Π»Π΅ΠΊΡ‚ - это", return_tensors='pt').to(model.device)["input_ids"]
output = model.generate(input_ids, max_new_tokens=48, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0]))
```

Output:

```
Π˜ΡΠΊΡƒΡΡΡ‚Π²Π΅Π½Π½Ρ‹ΠΉ ΠΈΠ½Ρ‚Π΅Π»Π»Π΅ΠΊΡ‚ - это всСго-навсСго ΠΏΡ€ΠΎΠ³Ρ€Π°ΠΌΠΌΠ°, которая Π°Π½Π°Π»ΠΈΠ·ΠΈΡ€ΡƒΠ΅Ρ‚ Π΄Π°Π½Π½Ρ‹Π΅ ΠΈ Ρ€Π΅ΡˆΠ°Π΅Ρ‚, насколько Ρ‚ΠΎΡ‚ ΠΈΠ»ΠΈ ΠΈΠ½ΠΎΠΉ Π²Ρ‹Π±ΠΎΡ€ ΠΌΠΎΠΆΠ΅Ρ‚ ΠΎΠΊΠ°Π·Π°Ρ‚ΡŒΡΡ ΠΎΠΏΡ‚ΠΈΠΌΠ°Π»ΡŒΠ½Ρ‹ΠΌ. Как ΠΈ Π²ΠΎ всСх ΠΎΡΡ‚Π°Π»ΡŒΠ½Ρ‹Ρ… сфСрах чСловСчСской Π΄Π΅ΡΡ‚Π΅Π»ΡŒΠ½ΠΎΡΡ‚ΠΈ, Π² IT Π΅ΡΡ‚ΡŒ свои ΠΏΠ»ΡŽΡΡ‹ ΠΈ минусы. И Ссли Π² ΠΏΡ€ΠΎΡˆΠ»ΠΎΠΌ Π²Π΅ΠΊΠ΅ искусствСнный ΠΈΠ½Ρ‚Π΅Π»Π»Π΅ΠΊΡ‚ Π±Ρ‹Π» Ρ‡Π΅ΠΌ
```

(Rough English translation of the sample: "Artificial intelligence is merely a program that analyzes data and decides how optimal a given choice may turn out to be. As in all other spheres of human activity, IT has its pros and cons. And if in the last century artificial intelligence was somethingβ€¦")

# Evaluation

The results are obtained through the Russian-language benchmark [MERA](https://mera.a-ai.ru/ru)

Total score: 0.198

| Task         | Result        | Metric             |
|--------------|---------------|--------------------|
| BPS          | 0.44          | Accuracy           |
| LCS          | 0.118         | Accuracy           |
| RCB          | 0.333 / 0.167 | Avg. F1 / Accuracy |
| USE          | 0             | Grade Norm         |
| RWSD         | 0.523         | Accuracy           |
| PARus        | 0.498         | Accuracy           |
| ruTiE        | 0.5           | Accuracy           |
| MultiQ       | 0.059 / 0.007 | F1-score/EM        |
| ruMMLU       | 0.25          | Accuracy           |
| CheGeKa      | 0.006 / 0     | F1 / EM            |
| ruModAr      | 0.001         | Accuracy           |
| SimpleAr     | 0.001         | Accuracy           |
| ruMultiAr    | 0.011         | Accuracy           |
| MathLogicQA  | 0.245         | Accuracy           |
| ruHumanEval  | 0 / 0 / 0     | pass@k             |
| ruWorldTree  | 0.265 / 0.246 | Avg. F1 / Accuracy |
| ruOpenBookQA | 0.24 / 0.221  | Avg. F1 / Accuracy |
devbuzz142/cp04-pretrain-gpt2-ALL-NNN4-split
devbuzz142
2024-04-01T08:32:54Z
182
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T08:31:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KimByeongSu/gpt-neo-125m-cs-finetuning-70000-1
KimByeongSu
2024-04-01T08:32:43Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-125m", "base_model:finetune:EleutherAI/gpt-neo-125m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T08:12:07Z
---
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125m-cs-finetuning-70000-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt-neo-125m-cs-finetuning-70000-1

This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1890

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4252        | 1.0   | 894  | 3.2582          |
| 3.1477        | 2.0   | 1788 | 3.2028          |
| 3.0496        | 3.0   | 2682 | 3.1890          |

### Framework versions

- Transformers 4.36.2
- Pytorch 1.13.1+cu117
- Datasets 2.14.6
- Tokenizers 0.15.0
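Generation works through the standard pipeline API; the prompt below is only a hypothetical example:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="KimByeongSu/gpt-neo-125m-cs-finetuning-70000-1")
print(generator("The main idea of the paper is", max_new_tokens=40)[0]["generated_text"])
```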
peulsilva/phrase-bert-setfit-50shots-ADE_CORPUS
peulsilva
2024-04-01T08:31:52Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-04-01T08:31:30Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# peulsilva/phrase-bert-setfit-50shots-ADE_CORPUS

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('peulsilva/phrase-bert-setfit-50shots-ADE_CORPUS')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('peulsilva/phrase-bert-setfit-50shots-ADE_CORPUS')
model = AutoModel.from_pretrained('peulsilva/phrase-bert-setfit-50shots-ADE_CORPUS')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=peulsilva/phrase-bert-setfit-50shots-ADE_CORPUS)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 406 with parameters:
```
{'batch_size': 1, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': None}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
blockblockblock/Faro-Yi-9B-200K-bpw4.8
blockblockblock
2024-04-01T08:30:02Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "en", "dataset:wenbopan/Fusang-v1", "dataset:wenbopan/OpenOrca-zh-20k", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-04-01T08:27:56Z
---
license: mit
datasets:
- wenbopan/Fusang-v1
- wenbopan/OpenOrca-zh-20k
language:
- zh
- en
---

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp)

**The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro seamlessly works in both English and Chinese.**

# Faro-Yi-9B

Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1.

## How to Use

Faro-Yi-9B uses the chatml template, which makes it easy to set up system prompts and multi-turn conversations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed path to the original full-precision weights; this EXL2-quantized repo
# itself requires an exllamav2-based loader instead of plain transformers.
model_path = "wenbopan/Faro-Yi-9B"

model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path)
messages = [
    {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
    {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5)
response = tokenizer.decode(generated_ids[0], skip_special_tokens=True)

# Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. It's like this: if you have a triangle with two sides, you can find the length of the third side by squaring the two sides and then adding them together. The square root of that sum will give you the length of the third side! It's useful for sailing and navigating, so you always know how far you've traveled. Remember, it's all about the sum of squares, me hearties!
```

## Performance

Faro-Yi-9B improves on Yi-9B-200K along most dimensions, especially long-range modeling and bilingual (English, Chinese) understanding. Faro is competitive among all open-source models at around 9B parameters.
<details> <summary>Benchmark Results</summary>

### Fact-based Evaluation (Open LLM Leaderboard)

| **Metric**     | **MMLU**  | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc**   | **Winogrande** |
| -------------- | --------- | --------- | ------------- | -------------- | --------- | -------------- |
| **Yi-9B-200K** | 65.73     | 50.49     | 56.72         | 33.80          | 69.25     | 71.67          |
| **Faro-Yi-9B** | **68.80** | **63.08** | **57.28**     | **40.86**      | **72.58** | 71.11          |

### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench))

| **Name**       | **Average_zh** | **Average_en** | **Code Completion** |
|----------------|----------------|----------------|---------------------|
| **Yi-9B-200K** | 30.288         | 36.7071        | 72.2                |
| **Faro-Yi-9B** | **41.092**     | **40.9536**    | 46.0                |

<details> <summary>Score breakdown</summary>

| **Name**       | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** |
|----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------|
| **Yi-9B-200K** | 60.6                     | 22.8                   | 30.9                 | 38.9                | 25.8                 | 46.5                     | 28.0                   | 49.6                 | 17.7                | 9.7                  |
| **Faro-Yi-9B** | **63.8**                 | **40.2**               | **36.2**             | 38.0                | **26.3**             | 30.0                     | **75.1**               | **55.6**             | **30.7**            | **14.1**             |

</details>

<!--### Performance on Preference TODO-->

### Bilingual Ability (CMMLU & MMLU)

| **Name**       | **MMLU**  | **CMMLU** |
| -------------- | --------- | --------- |
| **Yi-9B-200K** | 65.73     | 71.97     |
| **Faro-Yi-9B** | **68.80** | **73.28** |

</details>
0x0son0/s_61
0x0son0
2024-04-01T08:26:24Z
92
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T08:11:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
drchandra-code/practice-bert-finetuned-ner
drchandra-code
2024-04-01T08:25:01Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-04-01T08:13:50Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: practice-bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # practice-bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0664 - Precision: 0.9326 - Recall: 0.9507 - F1: 0.9416 - Accuracy: 0.9872 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.024 | 1.0 | 1756 | 0.0618 | 0.9232 | 0.9468 | 0.9349 | 0.9851 | | 0.0212 | 2.0 | 3512 | 0.0647 | 0.9344 | 0.9492 | 0.9417 | 0.9870 | | 0.0103 | 3.0 | 5268 | 0.0664 | 0.9326 | 0.9507 | 0.9416 | 0.9872 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
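## How to use

Usage is not documented in the original card; the following is a minimal inference sketch using the πŸ€— `pipeline` API. The example sentence is illustrative, and the entity label set depends on the (unknown) training dataset.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for token classification (NER).
ner = pipeline(
    "token-classification",
    model="drchandra-code/practice-bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("My name is Sarah and I work at Acme Corp in London."))
```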
balaramas/enbnsumm-mT5-XLSumm
balaramas
2024-04-01T08:24:17Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "en", "base_model:csebuetnlp/mT5_multilingual_XLSum", "base_model:finetune:csebuetnlp/mT5_multilingual_XLSum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-04-01T07:00:28Z
--- language: - en base_model: csebuetnlp/mT5_multilingual_XLSum tags: - generated_from_trainer model-index: - name: enbnsumm-mT5-XLSumm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # enbnsumm-mT5-XLSumm This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.39.1 - Pytorch 2.3.0a0+ebedce2 - Datasets 2.18.0 - Tokenizers 0.15.2
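## How to use

The card does not include usage code; below is a minimal sketch with the `summarization` pipeline. The generation settings are illustrative, the placeholder article must be replaced with real input, and the English/Bangla focus is inferred from the model name ("enbnsumm"), not documented.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="balaramas/enbnsumm-mT5-XLSumm")

article = "Replace this with the news article you want to summarize."
summary = summarizer(article, max_length=84, no_repeat_ngram_size=2)[0]["summary_text"]
print(summary)
```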
Sumail/Goat_Derrick16
Sumail
2024-04-01T08:21:46Z
92
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:coffiee/s25", "base_model:merge:coffiee/s25", "base_model:coffiee/s26", "base_model:merge:coffiee/s26", "base_model:zzttbrdd/sn6_05s", "base_model:merge:zzttbrdd/sn6_05s", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T08:20:06Z
--- base_model: - coffiee/s25 - coffiee/s26 - zzttbrdd/sn6_05s library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [coffiee/s25](https://huggingface.co/coffiee/s25) as a base. ### Models Merged The following models were included in the merge: * [coffiee/s26](https://huggingface.co/coffiee/s26) * [zzttbrdd/sn6_05s](https://huggingface.co/zzttbrdd/sn6_05s) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: coffiee/s25 # no parameters necessary for base model - model: coffiee/s26 parameters: density: 0.5 weight: 0.3 - model: zzttbrdd/sn6_05s parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: coffiee/s25 parameters: normalize: true dtype: bfloat16 ```
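## Usage

The card stops at the merge configuration; a minimal loading sketch for the merged checkpoint follows. The prompt and generation settings are illustrative, and `device_map="auto"` assumes `accelerate` is installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Sumail/Goat_Derrick16"
tokenizer = AutoTokenizer.from_pretrained(repo)
# bfloat16 matches the dtype declared in the merge config above.
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```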
atk1432/q-Taxi-v3
atk1432
2024-04-01T08:20:40Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-04-01T08:20:35Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage

```python
import gymnasium as gym  # the course notebooks use gymnasium

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="atk1432/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
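Beyond loading the Q-table, a greedy rollout looks like the sketch below. It assumes the pickled dict uses the Deep RL course's conventional keys (`"env_id"`, `"qtable"`); verify these against the actual file.

```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table row
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```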
nzdb70/ppo-LunarLander-v2
nzdb70
2024-04-01T08:19:09Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-29T11:07:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 259.50 +/- 24.26 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
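Since the card's usage section is still a TODO, here is a minimal loading-and-evaluation sketch. The checkpoint filename is an assumption — check the repository's file list for the actual `.zip` name.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; verify it in the repository's "Files" tab.
checkpoint = load_from_hub(repo_id="nzdb70/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```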
blockblockblock/Faro-Yi-9B-200K-bpw4.6
blockblockblock
2024-04-01T08:14:26Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "en", "dataset:wenbopan/Fusang-v1", "dataset:wenbopan/OpenOrca-zh-20k", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-04-01T08:12:20Z
--- license: mit datasets: - wenbopan/Fusang-v1 - wenbopan/OpenOrca-zh-20k language: - zh - en --- ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp) **The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro works seamlessly in both English and Chinese.** # Faro-Yi-9B Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained greater capability on various downstream tasks and in long-context modeling, thanks to the large-scale synthetic data in Fusang-V1. ## How to Use Faro-Yi-9B uses the ChatML template, which makes it easy to set up system prompts and multi-turn conversations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "wenbopan/Faro-Yi-9B"  # undefined in the original snippet; point this at the checkpoint you use

model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
    {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5)
response = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
# Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. It's like this: if you have a triangle with two sides, you can find the length of the third side by squaring the two sides and then adding them together. The square root of that sum will give you the length of the third side! It's useful for sailing and navigating, so you always know how far you've traveled. Remember, it's all about the sum of squares, me hearties!
```

## Performance Faro-Yi-9B improves on Yi-9B-200K along most dimensions, especially long-range modeling and bilingual (English, Chinese) understanding. Faro is competitive among open-source models of around 9B parameters. 
<details> <summary>Benchmark Results</summary> ### Fact-based Evaluation (Open LLM Leaderboard) | **Metric** | **MMLU** | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc** | **Winogrande** | | -------------- | --------- | --------- | ------------- | -------------- | ----------- | -------------- | | **Yi-9B-200K** | 65.73 | 50.49 | 56.72 | 33.80 | 69.25 | 71.67 | | **Faro-Yi-9B** | **68.80** | **63.08** | **57.28** | **40.86** | **72.58** | 71.11 | ### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench)) | **Name** | **Average_zh** | **Average_en** | **Code Completion** | |----------------|----------------|----------------|---------------------| | **Yi-9B-200K** | 30.288 | 36.7071 | 72.2 | | **Faro-Yi-9B** | **41.092** | **40.9536** | 46.0 | <details> <summary>Score breakdown</summary> | **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** | |----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------| | **Yi-9B-200K** | 60.6 | 22.8 | 30.9 | 38.9 | 25.8 | 46.5 | 28.0 | 49.6 | 17.7 | 9.7 | | **Faro-Yi-9B** | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** | </details> <!--### Performance on Preference TODO--> ### Bilingual Ability (CMMLU & MMLU) | **Name** | MMLU | **CMMLU** | | -------------- | --------- | --------- | | **Yi-9B-200K** | 65.73 | 71.97 | | **Faro-Yi-9B** | **68.80** | **73.28** | </details>
KaggleMasterX/RobertaTok
KaggleMasterX
2024-04-01T08:04:43Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-01T08:04:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KaggleMasterX/Roberta
KaggleMasterX
2024-04-01T08:04:35Z
198
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-04-01T08:04:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zhuchi76/detr-resnet-50_finetuned_cppe5
zhuchi76
2024-04-01T07:59:03Z
189
0
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-04-01T06:02:07Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: detr-resnet-50_finetuned_cppe5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
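## How to use

Usage is not documented in the card; the sketch below runs the checkpoint through the `object-detection` pipeline. The image path is a placeholder, and the label set depends on the (unknown) fine-tuning dataset — the model name suggests CPPE-5 (medical PPE), but that is inferred.

```python
from transformers import pipeline

detector = pipeline("object-detection", model="zhuchi76/detr-resnet-50_finetuned_cppe5")

# Replace with a real image path, URL, or PIL.Image.
for det in detector("example.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```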
blockblockblock/Faro-Yi-9B-200K-bpw4.4
blockblockblock
2024-04-01T07:58:45Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "en", "dataset:wenbopan/Fusang-v1", "dataset:wenbopan/OpenOrca-zh-20k", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-04-01T07:56:53Z
--- license: mit datasets: - wenbopan/Fusang-v1 - wenbopan/OpenOrca-zh-20k language: - zh - en --- ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp) **The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro works seamlessly in both English and Chinese.** # Faro-Yi-9B Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained greater capability on various downstream tasks and in long-context modeling, thanks to the large-scale synthetic data in Fusang-V1. ## How to Use Faro-Yi-9B uses the ChatML template, which makes it easy to set up system prompts and multi-turn conversations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "wenbopan/Faro-Yi-9B"  # undefined in the original snippet; point this at the checkpoint you use

model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
    {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5)
response = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
# Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. It's like this: if you have a triangle with two sides, you can find the length of the third side by squaring the two sides and then adding them together. The square root of that sum will give you the length of the third side! It's useful for sailing and navigating, so you always know how far you've traveled. Remember, it's all about the sum of squares, me hearties!
```

## Performance Faro-Yi-9B improves on Yi-9B-200K along most dimensions, especially long-range modeling and bilingual (English, Chinese) understanding. Faro is competitive among open-source models of around 9B parameters. 
<details> <summary>Benchmark Results</summary> ### Fact-based Evaluation (Open LLM Leaderboard) | **Metric** | **MMLU** | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc** | **Winogrande** | | -------------- | --------- | --------- | ------------- | -------------- | ----------- | -------------- | | **Yi-9B-200K** | 65.73 | 50.49 | 56.72 | 33.80 | 69.25 | 71.67 | | **Faro-Yi-9B** | **68.80** | **63.08** | **57.28** | **40.86** | **72.58** | 71.11 | ### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench)) | **Name** | **Average_zh** | **Average_en** | **Code Completion** | |----------------|----------------|----------------|---------------------| | **Yi-9B-200K** | 30.288 | 36.7071 | 72.2 | | **Faro-Yi-9B** | **41.092** | **40.9536** | 46.0 | <details> <summary>Score breakdown</summary> | **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** | |----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------| | **Yi-9B-200K** | 60.6 | 22.8 | 30.9 | 38.9 | 25.8 | 46.5 | 28.0 | 49.6 | 17.7 | 9.7 | | **Faro-Yi-9B** | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** | </details> <!--### Performance on Preference TODO--> ### Bilingual Ability (CMMLU & MMLU) | **Name** | MMLU | **CMMLU** | | -------------- | --------- | --------- | | **Yi-9B-200K** | 65.73 | 71.97 | | **Faro-Yi-9B** | **68.80** | **73.28** | </details>
atk1432/q-FrozenLake-v1-4x4-noSlippery
atk1432
2024-04-01T07:51:59Z
0
0
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-04-01T07:17:34Z
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.77 +/- 0.42 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage

```python
import gymnasium as gym  # the course notebooks use gymnasium

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="atk1432/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
hereral/gemma-Clara-Finetuned
hereral
2024-04-01T07:49:12Z
130
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-01T07:42:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TedNtantame/Mistral_semtab
TedNtantame
2024-04-01T07:47:25Z
77
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-04-01T07:42:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
a-h-m-e-d/mistral_7b_v2_finetuned-v3
a-h-m-e-d
2024-04-01T07:47:18Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-01T07:46:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]