modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
Rupesh2/mistral_lora_model
Rupesh2
2024-01-26T22:29:16Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-26T22:29:08Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
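The "How to Get Started with the Model" section above is left as [More Information Needed]; a minimal sketch of what could go there, assuming the repo hosts a standard 🤗 Transformers causal-LM checkpoint (unsloth LoRA repos sometimes require loading the adapter via PEFT instead):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rupesh2/mistral_lora_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a short continuation as a smoke test.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```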
MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-26T22:27:34Z
43
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "v1olet/v1olet_marcoroni-go-bruins-merge-7B", "pytorch", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-26T22:18:49Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - v1olet/v1olet_marcoroni-go-bruins-merge-7B - pytorch - en - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). 
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
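For scripted workflows, the CLI download above can also be done from Python via `huggingface_hub`; a sketch using the Q4_K_M file named earlier:

```python
from huggingface_hub import hf_hub_download

# Fetches a single GGUF quant file into the current directory.
path = hf_hub_download(
    repo_id="MaziyarPanahi/v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF",
    filename="v1olet_marcoroni-go-bruins-merge-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    local_dir=".",
)
print(path)  # local path to the downloaded file
```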
iandavis/code-llama-7b-text-to-sql
iandavis
2024-01-26T22:25:00Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-01-26T21:15:14Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: codellama/CodeLlama-7b-hf model-index: - name: code-llama-7b-text-to-sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-text-to-sql This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
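The card above lists training details but no usage snippet; a sketch of loading the adapter with PEFT, assuming the repo stores a LoRA adapter for codellama/CodeLlama-7b-hf and that the tokenizer was pushed alongside it:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "iandavis/code-llama-7b-text-to-sql"
# Loads the CodeLlama-7b base weights and applies the LoRA adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Illustrative prompt only; the actual SFT prompt template is not documented in the card.
prompt = "-- Translate to SQL: list all customers from Berlin\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```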
terravivet/Judyv02
terravivet
2024-01-26T22:19:18Z
0
2
null
[ "tensorboard", "safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T22:19:13Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
intervitens/BagelMIsteryTour-v2-8x7B-3.7bpw-h6-exl2-rpcal
intervitens
2024-01-26T22:13:35Z
8
3
transformers
[ "transformers", "mixtral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora", "base_model:merge:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora", "base_model:Sao10K/Sensualize-Mixtral-bf16", "base_model:merge:Sao10K/Sensualize-Mixtral-bf16", "base_model:jondurbin/bagel-dpo-8x7b-v0.2", "base_model:merge:jondurbin/bagel-dpo-8x7b-v0.2", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:merge:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:merge:mistralai/Mixtral-8x7B-v0.1", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T22:01:42Z
--- base_model: - mistralai/Mixtral-8x7B-v0.1 - jondurbin/bagel-dpo-8x7b-v0.2 - Sao10K/Sensualize-Mixtral-bf16 - mistralai/Mixtral-8x7B-v0.1 - Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora - mistralai/Mixtral-8x7B-Instruct-v0.1 tags: - mergekit - merge --- Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset. For purposes other than RP, use quantizations done on a more general dataset. Requires ExllamaV2 version 0.0.11 and up. Original model link: [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B) Original model README below. *** # BagelMIsteryTour-v2-8x7B [GGUF versions here](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B-GGUF) [AWQ versions here](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B-AWQ) Bagel, Mixtral Instruct, with extra spices. Give it a taste. Works with Alpaca prompt formats, though the Mistral format should also work. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63044fa07373aacccd8a7c53/lxNMzXo_dq_JCP9YyUyaw.jpeg) I started experimenting around seeing if I could improve or fix some of Bagel's problems. Totally inspired by seeing how well Doctor-Shotgun's Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss worked (which is a LimaRP tune on top of base Mixtral, and then merged with Mixtral Instruct) - I decided to try some merges of Bagel with Mixtral Instruct as a result. Somehow I ended up here, Bagel, Mixtral Instruct, a little bit of LimaRP, a little bit of Sao10K's Sensualize. So far in my testing it's working very well, and while it seems fairly unaligned on a lot of stuff, it's maybe a little too aligned on a few specific things (which I think comes from Sensualize) - so that's something to play with in the future, or maybe try to DPO out. I've been running (temp last) minP 0.1, dynatemp 0.5-4, rep pen 1.07, rep range 1024. I've been testing Alpaca style Instruction/Response, and Instruction/Input/Response and those seem to work well, I expect Mistral's prompt format would also work well. You may need to add a stopping string on "{{char}}:" for RPs because it can sometimes duplicate those out in responses and waffle on. Seems to hold up and not fall apart at long contexts like Bagel and some other Mixtral tunes seem to, definitely doesn't seem prone to loopyness either. Can be pushed into extravagant prose if the scene/setting calls for it. __Version 2:__ lowered the mix of Sensualize. This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as a base. 
### Models Merged The following models were included in the merge: * [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2) * [Sao10K/Sensualize-Mixtral-bf16](https://huggingface.co/Sao10K/Sensualize-Mixtral-bf16) * [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora) * [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: mistralai/Mixtral-8x7B-v0.1 models: - model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora parameters: density: 0.5 weight: 0.2 - model: Sao10K/Sensualize-Mixtral-bf16 parameters: density: 0.5 weight: 0.1 - model: mistralai/Mixtral-8x7B-Instruct-v0.1 parameters: density: 0.6 weight: 1.0 - model: jondurbin/bagel-dpo-8x7b-v0.2 parameters: density: 0.6 weight: 0.5 merge_method: dare_ties dtype: bfloat16 ```
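Given a configuration like the YAML above saved to a file, a merge of this kind is typically produced with mergekit's CLI; a sketch, with file names chosen for illustration:

```shell
# Assumes the YAML above is saved as config.yml; writes the merged model to ./merged
mergekit-yaml config.yml ./merged --cuda --lazy-unpickle
```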
KukaRobotics/otonsfw
KukaRobotics
2024-01-26T22:12:29Z
1
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-01-26T22:12:25Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/location_barn_floor_boobjob.jpg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null --- # otonsfw <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/KukaRobotics/otonsfw/tree/main) them in the Files & versions tab.
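No usage snippet is provided; a sketch of applying an SDXL LoRA like this one with diffusers, assuming the repo's safetensors file is auto-discovered (otherwise pass `weight_name=`):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model this LoRA targets, then apply the LoRA weights.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("KukaRobotics/otonsfw")

image = pipe("your prompt here").images[0]
image.save("out.png")
```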
cloudyu/Mixtral_7Bx2_MoE_13B
cloudyu
2024-01-26T22:12:13Z
59
7
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-22T06:56:39Z
--- license: cc-by-nc-4.0 tags: - moe --- # Mixtral MOE 2x7B An MoE merge of the following models, built with mergekit: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [NurtureAI/neural-chat-7b-v3-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k) * [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1) It works and generates coherent text. GPU code example ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM ## v2 models model_path = "cloudyu/Mixtral_7Bx2_MoE_13B" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float32, device_map='auto', local_files_only=False, load_in_4bit=True ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda") generation_output = model.generate( input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ``` CPU example ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM ## v2 models model_path = "cloudyu/Mixtral_7Bx2_MoE_13B" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float32, device_map='cpu', local_files_only=False ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ```
MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-26T22:11:50Z
84
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "Intel/neural-chat-7b-v3-3-Slerp", "pytorch", "LLMs", "math", "Intel", "en", "dataset:meta-math/MetaMathQA", "dataset:Intel/orca_dpo_pairs", "arxiv:2309.12284", "base_model:meta-math/MetaMath-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-26T22:02:49Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - Intel/neural-chat-7b-v3-3-Slerp - pytorch - LLMs - math - Intel - en - dataset:meta-math/MetaMathQA - dataset:Intel/orca_dpo_pairs - arxiv:2309.12284 - base_model:meta-math/MetaMath-Mistral-7B - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . 
--local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
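The LangChain guides linked above amount to wrapping the downloaded GGUF file in an LLM class; a minimal sketch via llama-cpp-python (import path as of the LangChain versions current when this card was written; newer releases move it to `langchain_community.llms`):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./neural-chat-7b-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # max sequence length, matching the llama-cpp-python example above
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
)
print(llm("Write a haiku about llamas."))
```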
hadrakey/gpt2-hh-sft
hadrakey
2024-01-26T22:10:44Z
11
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T17:24:11Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: gpt2-sft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-sft This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1000 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
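No usage example is included; a generic sketch for trying a GPT-2-based causal LM like this one (the dialogue-style prompt is an assumption based on the "hh" naming, not documented in the card):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="hadrakey/gpt2-hh-sft")
result = generator("Human: How do I bake bread?\n\nAssistant:", max_new_tokens=60)
print(result[0]["generated_text"])
```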
KukaRobotics/Lollipop1nsfw
KukaRobotics
2024-01-26T22:09:15Z
2
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-01-26T22:09:06Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/hacking_minigame_01.jpg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null --- # Lollipop1nsfw <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/KukaRobotics/Lollipop1nsfw/tree/main) them in the Files & versions tab.
MidnightRunner/Pear
MidnightRunner
2024-01-26T21:58:31Z
7
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "automatic1111", "ComfyUI", "en", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:cc0-1.0", "region:us" ]
text-to-image
2024-01-09T15:44:38Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora - automatic1111 - ComfyUI widget: - text: >- (a beautiful woman:1.5), (looking straight at the camera:1.3), hazel eye color, sweat, trembling, blush, a woman with black hair, black manicure, (goosebumps:1.2), skin pores, sweat, raytracing, specular lighting, shallow depth of field, 1girl, smiling, dimples, long voluminous hair, beautiful, from below, (looking at the viewer), (in frame), bare shoulders parameters: negative_prompt: >- (jpeg artifacts), (logo, text, watermark, username), extra fingers, mutated hands, poorly drawn hands, mutation, dehydrated, extra limbs, cloned face, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, censoring, censor, bar_censor, mosaic censor, nipples, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, (outdoor:1.6), backlight, (ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.5), blurry, (bad anatomy:1.21), (bad proportions:1.331), extra limbs, (disfigured:1.331), (more than 2 nipples:1.331), (missing arms:1.331), (extra legs:1.331), (fused fingers:1.61051), (too many fingers:1.61051), (unclear eyes:1.331), lowers, bad hands, missing fingers, extra digit, (futa:1.1), bad hands, missing fingers, ((vagina)), ((out of focus body)), ((out of focus face)), ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), drawn by bad-artist, sketch by bad-artist-anime, (bad_prompt:0.8), (artist name, signature, watermark:1.4), (ugly:1.2), (worst quality, poor details:1.4), bad-hands-5, badhandv4, blurry, strabismus, wrong finger, incomplete hand output: url: images/black hot.png base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a woman with black hair, 1girl, woman license: cc0-1.0 language: - en metrics: - character --- # Pear <Gallery /> ## Model description This is my first LoRA research project. For research purposes I am building a base model, created with a 3D-sculpting-style approach to retraining across model versions, with a distinctive emphasis on realism, such as the contours of an hourglass body and facial features. It does not depict any real person. When using ADetailer or anything with facial improvements, it is recommended to mention: `a woman with black hair` Sampling method: DPM++ 2M Karras or DPM++ 2M SDE Karras. Sampling steps: 25-45. CFG scale: 7. Clip skip: 2. Recommended upscaler settings: Hires. fix with the 4x-UltraSharp upscaler, Hires steps: 10-20, Hires upscale: 1.5-2, denoising strength: ~0.3-0.5. Use Hires. fix for better results! Use ADetailer for a better face with `<lora:Pear_v1:0.8>`! ## Trigger words You should use `a woman with black hair` to trigger the image generation. You should use `1girl` to trigger the image generation. You should use `woman` to trigger the image generation. You should use `medium shot`, `portrait shot`, or `extreme closeup` to trigger the image generation. 
You should use `walk1nth3hallways` to initiate image generation with hallways in the background, in a horizontal (landscape) format. You should use `a woman in a bikini is sitting on a boat in the water` to initiate image generation in a horizontal (landscape) format. ## Download model Weights for this model are available in Safetensors format. [Download](/MidnightRunner/Pear/tree/main) them in the Files & versions tab.
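The recommended settings above map roughly onto diffusers as follows; a sketch, assuming the LoRA weights in this repo load by default (clip skip is not shown, as it has no direct diffusers equivalent):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# DPM++ 2M Karras, as recommended in the card
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
pipe.load_lora_weights("MidnightRunner/Pear")

image = pipe(
    "a woman with black hair, 1girl, medium shot",
    num_inference_steps=30,                  # card recommends 25-45 steps
    guidance_scale=7.0,                      # CFG scale 7
    cross_attention_kwargs={"scale": 0.8},   # LoRA weight, as in <lora:Pear_v1:0.8>
).images[0]
image.save("pear.png")
```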
Gerti/bert-base-multilingual-cased-finetuned-twitter_sentiment
Gerti
2024-01-26T21:55:54Z
13
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-23T11:12:26Z
--- license: apache-2.0 base_model: bert-base-multilingual-cased tags: - generated_from_trainer model-index: - name: bert-base-multilingual-cased-finetuned-twitter_sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-twitter_sentiment This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0045 - F1-score: 0.9985 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1-score | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1961 | 1.0 | 1080 | 0.0873 | 0.9819 | | 0.0918 | 2.0 | 2160 | 0.0252 | 0.9935 | | 0.0737 | 3.0 | 3240 | 0.0073 | 0.9985 | | 0.0298 | 4.0 | 4320 | 0.0087 | 0.9981 | | 0.01 | 5.0 | 5400 | 0.0045 | 0.9985 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
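No inference snippet is included; a sketch using the text-classification pipeline (the label names returned depend on the fine-tune's config and are not documented in the card):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Gerti/bert-base-multilingual-cased-finetuned-twitter_sentiment",
)
print(classifier("I love this new phone!"))
```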
andysalerno/cloudymixtral_sft_7Bx2_MoE_0.2
andysalerno
2024-01-26T21:44:16Z
5
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T21:39:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheBloke/Etheria-55b-v0.1-GPTQ
TheBloke
2024-01-26T21:38:31Z
23
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:SteelStorage/Etheria-55b-v0.1", "base_model:quantized:SteelStorage/Etheria-55b-v0.1", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-26T16:41:50Z
--- base_model: Steelskull/Etheria-55b-v0.1 inference: false model_creator: Steel model_name: Etheria 55B v0.1 model_type: yi prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke tags: - mergekit - merge --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Etheria 55B v0.1 - GPTQ - Model creator: [Steel](https://huggingface.co/Steelskull) - Original model: [Etheria 55B v0.1](https://huggingface.co/Steelskull/Etheria-55b-v0.1) <!-- description start --> # Description This repo contains GPTQ model files for [Steel's Etheria 55B v0.1](https://huggingface.co/Steelskull/Etheria-55b-v0.1). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Etheria-55b-v0.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Etheria-55b-v0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Etheria-55b-v0.1-GGUF) * [Steel's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Steelskull/Etheria-55b-v0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! 
<!-- README_GPTQ.md-compatible clients end -->

<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.

<details>
<summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Etheria-55b-v0.1-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 29.23 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Etheria-55b-v0.1-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 30.28 GB | Yes | 4-bit, with Act Order and group size 128g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Etheria-55b-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 33.48 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Etheria-55b-v0.1-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 22.39 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Etheria-55b-v0.1-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 23.39 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Etheria-55b-v0.1-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 26.43 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/Etheria-55b-v0.1-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Etheria-55b-v0.1-GPTQ:gptq-4bit-128g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `Etheria-55b-v0.1-GPTQ`:

```shell
mkdir Etheria-55b-v0.1-GPTQ
huggingface-cli download TheBloke/Etheria-55b-v0.1-GPTQ --local-dir Etheria-55b-v0.1-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Etheria-55b-v0.1-GPTQ
huggingface-cli download TheBloke/Etheria-55b-v0.1-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Etheria-55b-v0.1-GPTQ --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir Etheria-55b-v0.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Etheria-55b-v0.1-GPTQ --local-dir Etheria-55b-v0.1-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Etheria-55b-v0.1-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)

<!-- README_GPTQ.md-download-from-branches end -->

<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Etheria-55b-v0.1-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/Etheria-55b-v0.1-GPTQ:gptq-4bit-128g-actorder_True`
    - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Etheria-55b-v0.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!

<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Etheria-55b-v0.1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
system_message = "You are a helpful assistant"  # define the system message used in the template below
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```

### Example Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Etheria-55b-v0.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.

For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Steel's Etheria 55B v0.1

# Steelskull/Etheria-55b-v0.1

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/RAhrbktyyVQxOR1np-9L2.png)

## Merge Details

An attempt to make a functional Goliath-style merge to create an [Etheria] 55b-200k from two yi-34b-200k models.

This is a merge of both VerA and VerB of Etheria-55b (their numbers were surprisingly good). I then created a sacrificial 55B out of the most performant yi-34b-200k model, performed a DARE-TIES merge, and equalized the model into its current state.

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using Merged-Etheria-55b as a base.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Merged-Etheria-55b
models:
  - model: Sacr-Etheria-55b
    parameters:
      weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113]
      density: 0.61
  - model: Merged-Etheria-55b
    parameters:
      weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113]
      density: 0.61
merge_method: dare_ties
tokenizer_source: union
parameters:
  int8_mask: true
dtype: bfloat16
```
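To reproduce a merge like this, a config such as the YAML above can be fed to mergekit's `mergekit-yaml` entry point. A hypothetical sketch, assuming `pip install mergekit`, that the config has been saved as `etheria-merge.yaml`, and that the source models it references can be resolved locally or from the Hub (file and output names here are illustrative):

```python
# Hypothetical sketch: run the DARE-TIES merge config above through mergekit's
# CLI entry point. Assumes mergekit is installed and the source models in the
# YAML are available; the output directory name is illustrative.
import subprocess

subprocess.run(
    ["mergekit-yaml", "etheria-merge.yaml", "./Etheria-55b-merged"],
    check=True,
)
```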
ocordeiro/w2v-bert-2.0-portuguese-colab-CV16.0
ocordeiro
2024-01-26T21:36:18Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_16_0", "base_model:ocordeiro/w2v-bert-2.0-portuguese-colab-CV16.0", "base_model:finetune:ocordeiro/w2v-bert-2.0-portuguese-colab-CV16.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-26T13:36:02Z
--- base_model: ocordeiro/w2v-bert-2.0-portuguese-colab-CV16.0 tags: - generated_from_trainer datasets: - common_voice_16_0 model-index: - name: w2v-bert-2.0-portuguese-colab-CV16.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-portuguese-colab-CV16.0 This model is a fine-tuned version of [ocordeiro/w2v-bert-2.0-portuguese-colab-CV16.0](https://huggingface.co/ocordeiro/w2v-bert-2.0-portuguese-colab-CV16.0) on the common_voice_16_0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
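The card above omits inference code; a minimal sketch using the Transformers ASR pipeline, assuming a local Portuguese audio file (the file name is illustrative, and ffmpeg is needed so the pipeline can decode it):

```python
# Minimal sketch: transcribe Portuguese speech with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ocordeiro/w2v-bert-2.0-portuguese-colab-CV16.0",
)
print(asr("sample_pt.wav")["text"])
```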
Mary8/koelectra-small-v2-distilled-korquad-384
Mary8
2024-01-26T21:33:54Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "base_model:deepset/tinyroberta-squad2", "base_model:finetune:deepset/tinyroberta-squad2", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-18T18:17:51Z
---
license: cc-by-4.0
base_model: deepset/tinyroberta-squad2
tags:
- generated_from_trainer
model-index:
- name: koelectra-small-v2-distilled-korquad-384
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# koelectra-small-v2-distilled-korquad-384

This model is a fine-tuned version of [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
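The card above omits usage code; a minimal sketch with the Transformers question-answering pipeline (the question and context are illustrative):

```python
# Minimal sketch: extractive QA with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Mary8/koelectra-small-v2-distilled-korquad-384",
)
result = qa(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```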
MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-26T21:32:51Z
45
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "maywell/koOpenChat-sft", "pytorch", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us", "license:apache-2.0", "base_model:MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-26T21:23:48Z
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- maywell/koOpenChat-sft
- pytorch
- license:cc-by-sa-4.0
- autotrain_compatible
- endpoints_compatible
- region:us
- license:apache-2.0
model_name: koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1)

## Description
[MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1).

## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.

Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

</details>

## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: [MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",  # Download the model file first
    n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
    """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant""",  # Prompt in this model's ChatML format - replace {system_message} and {prompt} with your own text
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="chatml")  # Set chat_format according to the model you are using; this model's prompt template is ChatML
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
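Beyond the guides linked above, here is a minimal LangChain sketch, assuming `pip install langchain-community llama-cpp-python` and a locally downloaded GGUF file (parameters mirror the llama-cpp-python example above):

```python
# Hedged sketch: drive this GGUF file from LangChain via llama-cpp-python.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./koOpenChat-sft-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,
    n_gpu_layers=35,  # set to 0 without GPU acceleration
    temperature=0.7,
)
print(llm.invoke("<|im_start|>user\nWhat is GGUF?<|im_end|>\n<|im_start|>assistant\n"))
```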
KukaRobotics/rickralensfw
KukaRobotics
2024-01-26T21:31:20Z
23
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-01-26T21:31:16Z
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
  output:
    url: images/blank.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: alonadicktail, ass, penis, perineum, from behind, bent over, looking back
---

# rickralensfw

<Gallery />

## Trigger words

You should use the following trigger words to trigger the image generation: `alonadicktail`, `ass`, `penis`, `perineum`, `from behind`, `bent over`, `looking back`.

## Download model

Weights for this model are available in Safetensors format.

[Download](/KukaRobotics/rickralensfw/tree/main) them in the Files & versions tab.
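The card above does not include usage code; a minimal diffusers sketch, assuming a CUDA GPU and that `load_lora_weights` can locate the safetensors file in this repo (the output file name is illustrative):

```python
# Hedged sketch: apply this LoRA on top of the SDXL base model it targets.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("KukaRobotics/rickralensfw")

image = pipe("alonadicktail").images[0]  # combine with the trigger words listed above
image.save("out.png")
```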
LLMPrompGenAI/LLMPromptGen-AI
LLMPrompGenAI
2024-01-26T21:20:32Z
8
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "dataset:mosaicml/instruct-v3", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-01-26T21:03:03Z
--- library_name: transformers license: apache-2.0 datasets: - mosaicml/instruct-v3 pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arbaazmohideen/LLMpromptGenAI
arbaazmohideen
2024-01-26T21:16:14Z
4
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-01-26T21:09:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liminerity/Memgpt-slerp-7b-4
liminerity
2024-01-26T21:11:36Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "starsnatched/MemGPT", "liminerity/Memgpt-slerp-7b-3", "conversational", "base_model:liminerity/Memgpt-slerp-7b-3", "base_model:merge:liminerity/Memgpt-slerp-7b-3", "base_model:minchyeom/MemGPT", "base_model:merge:minchyeom/MemGPT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T21:05:14Z
--- tags: - merge - mergekit - lazymergekit - starsnatched/MemGPT - liminerity/Memgpt-slerp-7b-3 base_model: - starsnatched/MemGPT - liminerity/Memgpt-slerp-7b-3 --- # Memgpt-slerp-7b-4 Memgpt-slerp-7b-4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [starsnatched/MemGPT](https://huggingface.co/starsnatched/MemGPT) * [liminerity/Memgpt-slerp-7b-3](https://huggingface.co/liminerity/Memgpt-slerp-7b-3) ## 🧩 Configuration ```yaml slices: - sources: - model: starsnatched/MemGPT layer_range: [0, 32] - model: liminerity/Memgpt-slerp-7b-3 layer_range: [0, 32] merge_method: slerp base_model: starsnatched/MemGPT parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "liminerity/Memgpt-slerp-7b-4" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
alirzb/SeizureClassifier_AST_B_43275870
alirzb
2024-01-26T20:57:42Z
146
0
transformers
[ "transformers", "pytorch", "audio-spectrogram-transformer", "audio-classification", "generated_from_trainer", "base_model:MIT/ast-finetuned-audioset-10-10-0.4593", "base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
audio-classification
2024-01-26T19:29:01Z
---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SeizureClassifier_AST_B_43275870
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# SeizureClassifier_AST_B_43275870

This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0163
- Accuracy: 0.9975

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2929 | 0.99 | 44 | 0.1468 | 0.9455 |
| 0.0774 | 1.99 | 88 | 0.1206 | 0.9629 |
| 0.0424 | 2.98 | 132 | 0.0863 | 0.9728 |
| 0.0359 | 4.0 | 177 | 0.0519 | 0.9802 |
| 0.0526 | 4.99 | 221 | 0.0088 | 0.9975 |
| 0.0028 | 5.99 | 265 | 0.0116 | 0.9926 |
| 0.0051 | 6.98 | 309 | 0.0214 | 0.9926 |
| 0.0004 | 8.0 | 354 | 0.0246 | 0.9975 |
| 0.0002 | 8.99 | 398 | 0.0233 | 0.9975 |
| 0.0 | 9.99 | 442 | 0.0205 | 0.9975 |
| 0.0 | 10.98 | 486 | 0.0183 | 0.9975 |
| 0.0 | 12.0 | 531 | 0.0171 | 0.9975 |
| 0.0 | 12.99 | 575 | 0.0166 | 0.9975 |
| 0.0 | 13.99 | 619 | 0.0164 | 0.9975 |
| 0.0 | 14.92 | 660 | 0.0163 | 0.9975 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
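The card above omits inference code; a minimal sketch with the Transformers audio-classification pipeline (the file name is illustrative, and ffmpeg is needed so the pipeline can decode the audio):

```python
# Minimal sketch: classify an audio clip with the fine-tuned AST checkpoint.
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="alirzb/SeizureClassifier_AST_B_43275870",
)
print(clf("recording.wav"))  # list of {'label': ..., 'score': ...} dicts
```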
nicher92/nicher_embedder_v2
nicher92
2024-01-26T20:52:09Z
47
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-01-26T20:51:55Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # nicher92/nicher_embedder_v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('nicher92/nicher_embedder_v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('nicher92/nicher_embedder_v2') model = AutoModel.from_pretrained('nicher92/nicher_embedder_v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nicher92/nicher_embedder_v2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 187071 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MSELoss.MSELoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "eps": 1e-06, "lr": 8e-06 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 0, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
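Since the card cites clustering and semantic search as target tasks, here is a small similarity sketch to complement the usage snippets above (the query and documents are illustrative):

```python
# Minimal sketch: rank candidate passages against a query by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("nicher92/nicher_embedder_v2")
docs = [
    "Click 'Forgot password' on the login page.",
    "Our office opens at 9am on weekdays.",
]
query_emb = model.encode("How do I reset my password?", convert_to_tensor=True)
doc_emb = model.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_emb)[0]
print(sorted(zip(docs, scores.tolist()), key=lambda p: -p[1]))
```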
liminerity/Memgpt-slerp-7b-3
liminerity
2024-01-26T20:47:46Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/Memgpt-slerp-7b-2", "liminerity/Mem-Beagle-7b-slerp-v6", "conversational", "base_model:liminerity/Mem-Beagle-7b-slerp-v6", "base_model:merge:liminerity/Mem-Beagle-7b-slerp-v6", "base_model:liminerity/Memgpt-slerp-7b-2", "base_model:merge:liminerity/Memgpt-slerp-7b-2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T20:41:45Z
--- tags: - merge - mergekit - lazymergekit - liminerity/Memgpt-slerp-7b-2 - liminerity/Mem-Beagle-7b-slerp-v6 base_model: - liminerity/Memgpt-slerp-7b-2 - liminerity/Mem-Beagle-7b-slerp-v6 --- # Memgpt-slerp-7b-3 Memgpt-slerp-7b-3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [liminerity/Memgpt-slerp-7b-2](https://huggingface.co/liminerity/Memgpt-slerp-7b-2) * [liminerity/Mem-Beagle-7b-slerp-v6](https://huggingface.co/liminerity/Mem-Beagle-7b-slerp-v6) ## 🧩 Configuration ```yaml slices: - sources: - model: liminerity/Memgpt-slerp-7b-2 layer_range: [0, 32] - model: liminerity/Mem-Beagle-7b-slerp-v6 layer_range: [0, 32] merge_method: slerp base_model: liminerity/Memgpt-slerp-7b-2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "liminerity/Memgpt-slerp-7b-3" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Geksaida/mistral-7B-SysMLv2-test
Geksaida
2024-01-26T20:41:12Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T20:14:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wezyai/naked
wezyai
2024-01-26T20:27:43Z
0
0
null
[ "region:us" ]
null
2024-01-26T20:18:33Z
import gradio as gr import random import os import json import time import shared import modules.config import fooocus_version import modules.html import modules.async_worker as worker import modules.constants as constants import modules.flags as flags import modules.gradio_hijack as grh import modules.advanced_parameters as advanced_parameters import modules.style_sorter as style_sorter import modules.meta_parser import args_manager import copy from modules.sdxl_styles import legal_style_names from modules.private_logger import get_current_html_path from modules.ui_gradio_extensions import reload_javascript from modules.auth import auth_enabled, check_auth def generate_clicked(*args): import ldm_patched.modules.model_management as model_management with model_management.interrupt_processing_mutex: model_management.interrupt_processing = False # outputs=[progress_html, progress_window, progress_gallery, gallery] execution_start_time = time.perf_counter() task = worker.AsyncTask(args=list(args)) finished = False yield gr.update(visible=True, value=modules.html.make_progress_html(1, 'Waiting for task to start ...')), \ gr.update(visible=True, value=None), \ gr.update(visible=False, value=None), \ gr.update(visible=False) worker.async_tasks.append(task) while not finished: time.sleep(0.01) if len(task.yields) > 0: flag, product = task.yields.pop(0) if flag == 'preview': # help bad internet connection by skipping duplicated preview if len(task.yields) > 0: # if we have the next item if task.yields[0][0] == 'preview': # if the next item is also a preview # print('Skipped one preview for better internet connection.') continue percentage, title, image = product yield gr.update(visible=True, value=modules.html.make_progress_html(percentage, title)), \ gr.update(visible=True, value=image) if image is not None else gr.update(), \ gr.update(), \ gr.update(visible=False) if flag == 'results': yield gr.update(visible=True), \ gr.update(visible=True), \ gr.update(visible=True, value=product), \ gr.update(visible=False) if flag == 'finish': yield gr.update(visible=False), \ gr.update(visible=False), \ gr.update(visible=False), \ gr.update(visible=True, value=product) finished = True execution_time = time.perf_counter() - execution_start_time print(f'Total time: {execution_time:.2f} seconds') return reload_javascript() title = f'Fooocus {fooocus_version.version}' if isinstance(args_manager.args.preset, str): title += ' ' + args_manager.args.preset shared.gradio_root = gr.Blocks( title=title, css=modules.html.css).queue() with shared.gradio_root: with gr.Row(): with gr.Column(scale=2): with gr.Row(): progress_window = grh.Image(label='Preview', show_label=True, visible=False, height=768, elem_classes=['main_view']) progress_gallery = gr.Gallery(label='Finished Images', show_label=True, object_fit='contain', height=768, visible=False, elem_classes=['main_view', 'image_gallery']) progress_html = gr.HTML(value=modules.html.make_progress_html(32, 'Progress 32%'), visible=False, elem_id='progress-bar', elem_classes='progress-bar') gallery = gr.Gallery(label='Gallery', show_label=False, object_fit='contain', visible=True, height=768, elem_classes=['resizable_area', 'main_view', 'final_gallery', 'image_gallery'], elem_id='final_gallery') with gr.Row(elem_classes='type_row'): with gr.Column(scale=17): prompt = gr.Textbox(show_label=False, placeholder="Type prompt here or paste parameters.", elem_id='positive_prompt', container=False, autofocus=True, elem_classes='type_row', lines=1024) default_prompt = 
modules.config.default_prompt if isinstance(default_prompt, str) and default_prompt != '': shared.gradio_root.load(lambda: default_prompt, outputs=prompt) with gr.Column(scale=3, min_width=0): generate_button = gr.Button(label="Generate", value="Generate", elem_classes='type_row', elem_id='generate_button', visible=True) load_parameter_button = gr.Button(label="Load Parameters", value="Load Parameters", elem_classes='type_row', elem_id='load_parameter_button', visible=False) skip_button = gr.Button(label="Skip", value="Skip", elem_classes='type_row_half', visible=False) stop_button = gr.Button(label="Stop", value="Stop", elem_classes='type_row_half', elem_id='stop_button', visible=False) def stop_clicked(): import ldm_patched.modules.model_management as model_management shared.last_stop = 'stop' model_management.interrupt_current_processing() return [gr.update(interactive=False)] * 2 def skip_clicked(): import ldm_patched.modules.model_management as model_management shared.last_stop = 'skip' model_management.interrupt_current_processing() return stop_button.click(stop_clicked, outputs=[skip_button, stop_button], queue=False, show_progress=False, _js='cancelGenerateForever') skip_button.click(skip_clicked, queue=False, show_progress=False) with gr.Row(elem_classes='advanced_check_row'): input_image_checkbox = gr.Checkbox(label='Input Image', value=False, container=False, elem_classes='min_check') advanced_checkbox = gr.Checkbox(label='Advanced', value=modules.config.default_advanced_checkbox, container=False, elem_classes='min_check') with gr.Row(visible=False) as image_input_panel: with gr.Tabs(): with gr.TabItem(label='Upscale or Variation') as uov_tab: with gr.Row(): with gr.Column(): uov_input_image = grh.Image(label='Drag above image to here', source='upload', type='numpy') with gr.Column(): uov_method = gr.Radio(label='Upscale or Variation:', choices=flags.uov_list, value=flags.disabled) gr.HTML('<a href="https://github.com/lllyasviel/Fooocus/discussions/390" target="_blank">\U0001F4D4 Document</a>') with gr.TabItem(label='Image Prompt') as ip_tab: with gr.Row(): ip_images = [] ip_types = [] ip_stops = [] ip_weights = [] ip_ctrls = [] ip_ad_cols = [] for _ in range(4): with gr.Column(): ip_image = grh.Image(label='Image', source='upload', type='numpy', show_label=False, height=300) ip_images.append(ip_image) ip_ctrls.append(ip_image) with gr.Column(visible=False) as ad_col: with gr.Row(): default_end, default_weight = flags.default_parameters[flags.default_ip] ip_stop = gr.Slider(label='Stop At', minimum=0.0, maximum=1.0, step=0.001, value=default_end) ip_stops.append(ip_stop) ip_ctrls.append(ip_stop) ip_weight = gr.Slider(label='Weight', minimum=0.0, maximum=2.0, step=0.001, value=default_weight) ip_weights.append(ip_weight) ip_ctrls.append(ip_weight) ip_type = gr.Radio(label='Type', choices=flags.ip_list, value=flags.default_ip, container=False) ip_types.append(ip_type) ip_ctrls.append(ip_type) ip_type.change(lambda x: flags.default_parameters[x], inputs=[ip_type], outputs=[ip_stop, ip_weight], queue=False, show_progress=False) ip_ad_cols.append(ad_col) ip_advanced = gr.Checkbox(label='Advanced', value=False, container=False) gr.HTML('* \"Image Prompt\" is powered by Fooocus Image Mixture Engine (v1.0.1). 
<a href="https://github.com/lllyasviel/Fooocus/discussions/557" target="_blank">\U0001F4D4 Document</a>') def ip_advance_checked(x): return [gr.update(visible=x)] * len(ip_ad_cols) + \ [flags.default_ip] * len(ip_types) + \ [flags.default_parameters[flags.default_ip][0]] * len(ip_stops) + \ [flags.default_parameters[flags.default_ip][1]] * len(ip_weights) ip_advanced.change(ip_advance_checked, inputs=ip_advanced, outputs=ip_ad_cols + ip_types + ip_stops + ip_weights, queue=False, show_progress=False) with gr.TabItem(label='Inpaint or Outpaint') as inpaint_tab: with gr.Row(): inpaint_input_image = grh.Image(label='Drag inpaint or outpaint image to here', source='upload', type='numpy', tool='sketch', height=500, brush_color="#FFFFFF", elem_id='inpaint_canvas') inpaint_mask_image = grh.Image(label='Mask Upload', source='upload', type='numpy', height=500, visible=False) with gr.Row(): inpaint_additional_prompt = gr.Textbox(placeholder="Describe what you want to inpaint.", elem_id='inpaint_additional_prompt', label='Inpaint Additional Prompt', visible=False) outpaint_selections = gr.CheckboxGroup(choices=['Left', 'Right', 'Top', 'Bottom'], value=[], label='Outpaint Direction') inpaint_mode = gr.Dropdown(choices=modules.flags.inpaint_options, value=modules.flags.inpaint_option_default, label='Method') example_inpaint_prompts = gr.Dataset(samples=modules.config.example_inpaint_prompts, label='Additional Prompt Quick List', components=[inpaint_additional_prompt], visible=False) gr.HTML('* Powered by Fooocus Inpaint Engine <a href="https://github.com/lllyasviel/Fooocus/discussions/414" target="_blank">\U0001F4D4 Document</a>') example_inpaint_prompts.click(lambda x: x[0], inputs=example_inpaint_prompts, outputs=inpaint_additional_prompt, show_progress=False, queue=False) with gr.TabItem(label='Describe') as desc_tab: with gr.Row(): with gr.Column(): desc_input_image = grh.Image(label='Drag any image to here', source='upload', type='numpy') with gr.Column(): desc_method = gr.Radio( label='Content Type', choices=[flags.desc_type_photo, flags.desc_type_anime], value=flags.desc_type_photo) desc_btn = gr.Button(value='Describe this Image into Prompt') gr.HTML('<a href="https://github.com/lllyasviel/Fooocus/discussions/1363" target="_blank">\U0001F4D4 Document</a>') switch_js = "(x) => {if(x){viewer_to_bottom(100);viewer_to_bottom(500);}else{viewer_to_top();} return x;}" down_js = "() => {viewer_to_bottom();}" input_image_checkbox.change(lambda x: gr.update(visible=x), inputs=input_image_checkbox, outputs=image_input_panel, queue=False, show_progress=False, _js=switch_js) ip_advanced.change(lambda: None, queue=False, show_progress=False, _js=down_js) current_tab = gr.Textbox(value='uov', visible=False) uov_tab.select(lambda: 'uov', outputs=current_tab, queue=False, _js=down_js, show_progress=False) inpaint_tab.select(lambda: 'inpaint', outputs=current_tab, queue=False, _js=down_js, show_progress=False) ip_tab.select(lambda: 'ip', outputs=current_tab, queue=False, _js=down_js, show_progress=False) desc_tab.select(lambda: 'desc', outputs=current_tab, queue=False, _js=down_js, show_progress=False) with gr.Column(scale=1, visible=modules.config.default_advanced_checkbox) as advanced_column: with gr.Tab(label='Setting'): performance_selection = gr.Radio(label='Performance', choices=modules.flags.performance_selections, value=modules.config.default_performance) aspect_ratios_selection = gr.Radio(label='Aspect Ratios', choices=modules.config.available_aspect_ratios, value=modules.config.default_aspect_ratio, 
info='width × height', elem_classes='aspect_ratios') image_number = gr.Slider(label='Image Number', minimum=1, maximum=modules.config.default_max_image_number, step=1, value=modules.config.default_image_number) negative_prompt = gr.Textbox(label='Negative Prompt', show_label=True, placeholder="Type prompt here.", info='Describing what you do not want to see.', lines=2, elem_id='negative_prompt', value=modules.config.default_prompt_negative) seed_random = gr.Checkbox(label='Random', value=True) image_seed = gr.Textbox(label='Seed', value=0, max_lines=1, visible=False) # workaround for https://github.com/gradio-app/gradio/issues/5354 def random_checked(r): return gr.update(visible=not r) def refresh_seed(r, seed_string): if r: return random.randint(constants.MIN_SEED, constants.MAX_SEED) else: try: seed_value = int(seed_string) if constants.MIN_SEED <= seed_value <= constants.MAX_SEED: return seed_value except ValueError: pass return random.randint(constants.MIN_SEED, constants.MAX_SEED) seed_random.change(random_checked, inputs=[seed_random], outputs=[image_seed], queue=False, show_progress=False) if not args_manager.args.disable_image_log: gr.HTML(f'<a href="file={get_current_html_path()}" target="_blank">\U0001F4DA History Log</a>') with gr.Tab(label='Style'): style_sorter.try_load_sorted_styles( style_names=legal_style_names, default_selected=modules.config.default_styles) style_search_bar = gr.Textbox(show_label=False, container=False, placeholder="\U0001F50E Type here to search styles ...", value="", label='Search Styles') style_selections = gr.CheckboxGroup(show_label=False, container=False, choices=copy.deepcopy(style_sorter.all_styles), value=copy.deepcopy(modules.config.default_styles), label='Selected Styles', elem_classes=['style_selections']) gradio_receiver_style_selections = gr.Textbox(elem_id='gradio_receiver_style_selections', visible=False) shared.gradio_root.load(lambda: gr.update(choices=copy.deepcopy(style_sorter.all_styles)), outputs=style_selections) style_search_bar.change(style_sorter.search_styles, inputs=[style_selections, style_search_bar], outputs=style_selections, queue=False, show_progress=False).then( lambda: None, _js='()=>{refresh_style_localization();}') gradio_receiver_style_selections.input(style_sorter.sort_styles, inputs=style_selections, outputs=style_selections, queue=False, show_progress=False).then( lambda: None, _js='()=>{refresh_style_localization();}') with gr.Tab(label='Model'): with gr.Group(): with gr.Row(): base_model = gr.Dropdown(label='Base Model (SDXL only)', choices=modules.config.model_filenames, value=modules.config.default_base_model_name, show_label=True) refiner_model = gr.Dropdown(label='Refiner (SDXL or SD 1.5)', choices=['None'] + modules.config.model_filenames, value=modules.config.default_refiner_model_name, show_label=True) refiner_switch = gr.Slider(label='Refiner Switch At', minimum=0.1, maximum=1.0, step=0.0001, info='Use 0.4 for SD1.5 realistic models; ' 'or 0.667 for SD1.5 anime models; ' 'or 0.8 for XL-refiners; ' 'or any value for switching two SDXL models.', value=modules.config.default_refiner_switch, visible=modules.config.default_refiner_model_name != 'None') refiner_model.change(lambda x: gr.update(visible=x != 'None'), inputs=refiner_model, outputs=refiner_switch, show_progress=False, queue=False) with gr.Group(): lora_ctrls = [] for i, (n, v) in enumerate(modules.config.default_loras): with gr.Row(): lora_model = gr.Dropdown(label=f'LoRA {i + 1}', choices=['None'] + modules.config.lora_filenames, value=n) 
lora_weight = gr.Slider(label='Weight', minimum=-2, maximum=2, step=0.01, value=v, elem_classes='lora_weight') lora_ctrls += [lora_model, lora_weight] with gr.Row(): model_refresh = gr.Button(label='Refresh', value='\U0001f504 Refresh All Files', variant='secondary', elem_classes='refresh_button') with gr.Tab(label='Advanced'): guidance_scale = gr.Slider(label='Guidance Scale', minimum=1.0, maximum=30.0, step=0.01, value=modules.config.default_cfg_scale, info='Higher value means style is cleaner, vivider, and more artistic.') sharpness = gr.Slider(label='Image Sharpness', minimum=0.0, maximum=30.0, step=0.001, value=modules.config.default_sample_sharpness, info='Higher value means image and texture are sharper.') gr.HTML('<a href="https://github.com/lllyasviel/Fooocus/discussions/117" target="_blank">\U0001F4D4 Document</a>') dev_mode = gr.Checkbox(label='Developer Debug Mode', value=False, container=False) with gr.Column(visible=False) as dev_tools: with gr.Tab(label='Debug Tools'): adm_scaler_positive = gr.Slider(label='Positive ADM Guidance Scaler', minimum=0.1, maximum=3.0, step=0.001, value=1.5, info='The scaler multiplied to positive ADM (use 1.0 to disable). ') adm_scaler_negative = gr.Slider(label='Negative ADM Guidance Scaler', minimum=0.1, maximum=3.0, step=0.001, value=0.8, info='The scaler multiplied to negative ADM (use 1.0 to disable). ') adm_scaler_end = gr.Slider(label='ADM Guidance End At Step', minimum=0.0, maximum=1.0, step=0.001, value=0.3, info='When to end the guidance from positive/negative ADM. ') refiner_swap_method = gr.Dropdown(label='Refiner swap method', value='joint', choices=['joint', 'separate', 'vae']) adaptive_cfg = gr.Slider(label='CFG Mimicking from TSNR', minimum=1.0, maximum=30.0, step=0.01, value=modules.config.default_cfg_tsnr, info='Enabling Fooocus\'s implementation of CFG mimicking for TSNR ' '(effective when real CFG > mimicked CFG).') sampler_name = gr.Dropdown(label='Sampler', choices=flags.sampler_list, value=modules.config.default_sampler) scheduler_name = gr.Dropdown(label='Scheduler', choices=flags.scheduler_list, value=modules.config.default_scheduler) generate_image_grid = gr.Checkbox(label='Generate Image Grid for Each Batch', info='(Experimental) This may cause performance problems on some computers and certain internet conditions.', value=False) overwrite_step = gr.Slider(label='Forced Overwrite of Sampling Step', minimum=-1, maximum=200, step=1, value=modules.config.default_overwrite_step, info='Set as -1 to disable. For developer debugging.') overwrite_switch = gr.Slider(label='Forced Overwrite of Refiner Switch Step', minimum=-1, maximum=200, step=1, value=modules.config.default_overwrite_switch, info='Set as -1 to disable. For developer debugging.') overwrite_width = gr.Slider(label='Forced Overwrite of Generating Width', minimum=-1, maximum=2048, step=1, value=-1, info='Set as -1 to disable. For developer debugging. ' 'Results will be worse for non-standard numbers that SDXL is not trained on.') overwrite_height = gr.Slider(label='Forced Overwrite of Generating Height', minimum=-1, maximum=2048, step=1, value=-1, info='Set as -1 to disable. For developer debugging. ' 'Results will be worse for non-standard numbers that SDXL is not trained on.') overwrite_vary_strength = gr.Slider(label='Forced Overwrite of Denoising Strength of "Vary"', minimum=-1, maximum=1.0, step=0.001, value=-1, info='Set as negative number to disable. 
For developer debugging.') overwrite_upscale_strength = gr.Slider(label='Forced Overwrite of Denoising Strength of "Upscale"', minimum=-1, maximum=1.0, step=0.001, value=-1, info='Set as negative number to disable. For developer debugging.') disable_preview = gr.Checkbox(label='Disable Preview', value=False, info='Disable preview during generation.') with gr.Tab(label='Control'): debugging_cn_preprocessor = gr.Checkbox(label='Debug Preprocessors', value=False, info='See the results from preprocessors.') skipping_cn_preprocessor = gr.Checkbox(label='Skip Preprocessors', value=False, info='Do not preprocess images. (Inputs are already canny/depth/cropped-face/etc.)') mixing_image_prompt_and_vary_upscale = gr.Checkbox(label='Mixing Image Prompt and Vary/Upscale', value=False) mixing_image_prompt_and_inpaint = gr.Checkbox(label='Mixing Image Prompt and Inpaint', value=False) controlnet_softness = gr.Slider(label='Softness of ControlNet', minimum=0.0, maximum=1.0, step=0.001, value=0.25, info='Similar to the Control Mode in A1111 (use 0.0 to disable). ') with gr.Tab(label='Canny'): canny_low_threshold = gr.Slider(label='Canny Low Threshold', minimum=1, maximum=255, step=1, value=64) canny_high_threshold = gr.Slider(label='Canny High Threshold', minimum=1, maximum=255, step=1, value=128) with gr.Tab(label='Inpaint'): debugging_inpaint_preprocessor = gr.Checkbox(label='Debug Inpaint Preprocessing', value=False) inpaint_disable_initial_latent = gr.Checkbox(label='Disable initial latent in inpaint', value=False) inpaint_engine = gr.Dropdown(label='Inpaint Engine', value=modules.config.default_inpaint_engine_version, choices=flags.inpaint_engine_versions, info='Version of Fooocus inpaint model') inpaint_strength = gr.Slider(label='Inpaint Denoising Strength', minimum=0.0, maximum=1.0, step=0.001, value=1.0, info='Same as the denoising strength in A1111 inpaint. ' 'Only used in inpaint, not used in outpaint. ' '(Outpaint always use 1.0)') inpaint_respective_field = gr.Slider(label='Inpaint Respective Field', minimum=0.0, maximum=1.0, step=0.001, value=0.618, info='The area to inpaint. ' 'Value 0 is same as "Only Masked" in A1111. ' 'Value 1 is same as "Whole Image" in A1111. ' 'Only used in inpaint, not used in outpaint. ' '(Outpaint always use 1.0)') inpaint_erode_or_dilate = gr.Slider(label='Mask Erode or Dilate', minimum=-64, maximum=64, step=1, value=0, info='Positive value will make white area in the mask larger, ' 'negative value will make white area smaller.' 
'(default is 0, always process before any mask invert)') inpaint_mask_upload_checkbox = gr.Checkbox(label='Enable Mask Upload', value=False) invert_mask_checkbox = gr.Checkbox(label='Invert Mask', value=False) inpaint_ctrls = [debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field, inpaint_mask_upload_checkbox, invert_mask_checkbox, inpaint_erode_or_dilate] inpaint_mask_upload_checkbox.change(lambda x: gr.update(visible=x), inputs=inpaint_mask_upload_checkbox, outputs=inpaint_mask_image, queue=False, show_progress=False) with gr.Tab(label='FreeU'): freeu_enabled = gr.Checkbox(label='Enabled', value=False) freeu_b1 = gr.Slider(label='B1', minimum=0, maximum=2, step=0.01, value=1.01) freeu_b2 = gr.Slider(label='B2', minimum=0, maximum=2, step=0.01, value=1.02) freeu_s1 = gr.Slider(label='S1', minimum=0, maximum=4, step=0.01, value=0.99) freeu_s2 = gr.Slider(label='S2', minimum=0, maximum=4, step=0.01, value=0.95) freeu_ctrls = [freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2] adps = [disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height, overwrite_vary_strength, overwrite_upscale_strength, mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint, debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness, canny_low_threshold, canny_high_threshold, refiner_swap_method] adps += freeu_ctrls adps += inpaint_ctrls def dev_mode_checked(r): return gr.update(visible=r) dev_mode.change(dev_mode_checked, inputs=[dev_mode], outputs=[dev_tools], queue=False, show_progress=False) def model_refresh_clicked(): modules.config.update_all_model_names() results = [] results += [gr.update(choices=modules.config.model_filenames), gr.update(choices=['None'] + modules.config.model_filenames)] for i in range(5): results += [gr.update(choices=['None'] + modules.config.lora_filenames), gr.update()] return results model_refresh.click(model_refresh_clicked, [], [base_model, refiner_model] + lora_ctrls, queue=False, show_progress=False) performance_selection.change(lambda x: [gr.update(interactive=x != 'Extreme Speed')] * 11 + [gr.update(visible=x != 'Extreme Speed')] * 1, inputs=performance_selection, outputs=[ guidance_scale, sharpness, adm_scaler_end, adm_scaler_positive, adm_scaler_negative, refiner_switch, refiner_model, sampler_name, scheduler_name, adaptive_cfg, refiner_swap_method, negative_prompt ], queue=False, show_progress=False) advanced_checkbox.change(lambda x: gr.update(visible=x), advanced_checkbox, advanced_column, queue=False, show_progress=False) \ .then(fn=lambda: None, _js='refresh_grid_delayed', queue=False, show_progress=False) def inpaint_mode_change(mode): assert mode in modules.flags.inpaint_options # inpaint_additional_prompt, outpaint_selections, example_inpaint_prompts, # inpaint_disable_initial_latent, inpaint_engine, # inpaint_strength, inpaint_respective_field if mode == modules.flags.inpaint_option_detail: return [ gr.update(visible=True), gr.update(visible=False, value=[]), gr.Dataset.update(visible=True, samples=modules.config.example_inpaint_prompts), False, 'None', 0.5, 0.0 ] if mode == modules.flags.inpaint_option_modify: return [ gr.update(visible=True), gr.update(visible=False, value=[]), gr.Dataset.update(visible=False, samples=modules.config.example_inpaint_prompts), True, modules.config.default_inpaint_engine_version, 1.0, 0.0 
] return [ gr.update(visible=False, value=''), gr.update(visible=True), gr.Dataset.update(visible=False, samples=modules.config.example_inpaint_prompts), False, modules.config.default_inpaint_engine_version, 1.0, 0.618 ] inpaint_mode.input(inpaint_mode_change, inputs=inpaint_mode, outputs=[ inpaint_additional_prompt, outpaint_selections, example_inpaint_prompts, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field ], show_progress=False, queue=False) ctrls = [ prompt, negative_prompt, style_selections, performance_selection, aspect_ratios_selection, image_number, image_seed, sharpness, guidance_scale ] ctrls += [base_model, refiner_model, refiner_switch] + lora_ctrls ctrls += [input_image_checkbox, current_tab] ctrls += [uov_method, uov_input_image] ctrls += [outpaint_selections, inpaint_input_image, inpaint_additional_prompt, inpaint_mask_image] ctrls += ip_ctrls state_is_generating = gr.State(False) def parse_meta(raw_prompt_txt, is_generating): loaded_json = None try: if '{' in raw_prompt_txt: if '}' in raw_prompt_txt: if ':' in raw_prompt_txt: loaded_json = json.loads(raw_prompt_txt) assert isinstance(loaded_json, dict) except: loaded_json = None if loaded_json is None: if is_generating: return gr.update(), gr.update(), gr.update() else: return gr.update(), gr.update(visible=True), gr.update(visible=False) return json.dumps(loaded_json), gr.update(visible=False), gr.update(visible=True) prompt.input(parse_meta, inputs=[prompt, state_is_generating], outputs=[prompt, generate_button, load_parameter_button], queue=False, show_progress=False) load_parameter_button.click(modules.meta_parser.load_parameter_button_click, inputs=[prompt, state_is_generating], outputs=[ advanced_checkbox, image_number, prompt, negative_prompt, style_selections, performance_selection, aspect_ratios_selection, overwrite_width, overwrite_height, sharpness, guidance_scale, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, base_model, refiner_model, refiner_switch, sampler_name, scheduler_name, seed_random, image_seed, generate_button, load_parameter_button ] + lora_ctrls, queue=False, show_progress=False) generate_button.click(lambda: (gr.update(visible=True, interactive=True), gr.update(visible=True, interactive=True), gr.update(visible=False, interactive=False), [], True), outputs=[stop_button, skip_button, generate_button, gallery, state_is_generating]) \ .then(fn=refresh_seed, inputs=[seed_random, image_seed], outputs=image_seed) \ .then(advanced_parameters.set_all_advanced_parameters, inputs=adps) \ .then(fn=generate_clicked, inputs=ctrls, outputs=[progress_html, progress_window, progress_gallery, gallery]) \ .then(lambda: (gr.update(visible=True, interactive=True), gr.update(visible=False, interactive=False), gr.update(visible=False, interactive=False), False), outputs=[generate_button, stop_button, skip_button, state_is_generating]) \ .then(fn=lambda: None, _js='playNotification').then(fn=lambda: None, _js='refresh_grid_delayed') for notification_file in ['notification.ogg', 'notification.mp3']: if os.path.exists(notification_file): gr.Audio(interactive=False, value=notification_file, elem_id='audio_notification', visible=False) break def trigger_describe(mode, img): if mode == flags.desc_type_photo: from extras.interrogate import default_interrogator as default_interrogator_photo return default_interrogator_photo(img), ["Fooocus V2", "Fooocus Enhance", "Fooocus Sharp"] if mode == flags.desc_type_anime: from extras.wd14tagger import default_interrogator as 
default_interrogator_anime return default_interrogator_anime(img), ["Fooocus V2", "Fooocus Masterpiece"] return mode, ["Fooocus V2"] desc_btn.click(trigger_describe, inputs=[desc_method, desc_input_image], outputs=[prompt, style_selections], show_progress=True, queue=True) def dump_default_english_config(): from modules.localization import dump_english_config dump_english_config(grh.all_components) # dump_default_english_config() shared.gradio_root.launch( inbrowser=args_manager.args.in_browser, server_name=args_manager.args.listen, server_port=args_manager.args.port, share=args_manager.args.share, auth=check_auth if args_manager.args.share and auth_enabled else None, blocked_paths=[constants.AUTH_FILENAME] )
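The script above leans on one Gradio pattern worth isolating: `generate_clicked` is a generator event handler that polls the async worker and yields `gr.update(...)` tuples, so the progress bar, preview, and gallery refresh while generation runs. Below is a stripped-down sketch of that pattern with no Fooocus dependencies; all names are illustrative and `time.sleep` stands in for the real worker.

```python
import time
import gradio as gr

def run_task():
    # Generator handler: each yield pushes an update to (progress, result) in the browser.
    for pct in range(0, 101, 25):
        time.sleep(0.2)  # stand-in for the real background work
        yield gr.update(visible=True, value=f"Progress {pct}%"), gr.update(visible=False)
    yield gr.update(visible=False), gr.update(visible=True, value="Done")

with gr.Blocks() as demo:
    progress = gr.Textbox(label="Progress", visible=False)
    result = gr.Textbox(label="Result", visible=False)
    gr.Button("Generate").click(run_task, outputs=[progress, result])

# Generator handlers need the queue enabled, just as the script does with gr.Blocks(...).queue().
demo.queue().launch()
```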
AswanthCManoj/azma-phi-2-instruct-structured
AswanthCManoj
2024-01-26T20:14:32Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-25T16:52:41Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.2
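A minimal sketch of loading this adapter for inference with the same 4-bit NF4 configuration listed above. The prompt, generation settings, and the use of `trust_remote_code` are illustrative assumptions, not part of the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes config from the card: 4-bit NF4, double quantization,
# bfloat16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # phi-2 commonly required this at the time
)
model = PeftModel.from_pretrained(base, "AswanthCManoj/azma-phi-2-instruct-structured")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# Illustrative prompt in phi-2's Instruct/Output style.
inputs = tokenizer("Instruct: Summarize what PEFT adapters are.\nOutput:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```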
djmango/mistral-7b-v0.2-q4_0.gguf
djmango
2024-01-26T20:08:39Z
40
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-26T19:41:48Z
--- license: apache-2.0 --- This model is uploaded for Gravity (https://grav.ai/). All rights remain with the original authors. Thank you, Mistral and Meta AI teams, for the great work! We love you.
MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-26T20:07:31Z
121
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "Severian/ANIMA-Phi-Neptune-Mistral-7B", "pytorch", "chemistry", "biology", "climate", "science", "philosophy", "nature", "ecology", "biomimicry", "fauna", "flora", "dataset:Severian/Biomimicry", "dataset:emrgnt-cmplxty/sciphi-textbooks-are-all-you-need", "dataset:fmars/wiki_stem", "dataset:fblgit/tree-of-knowledge", "dataset:Severian/Bio-Design-Process", "license:artistic-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "license:apache-2.0", "base_model:MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-26T19:58:20Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - Severian/ANIMA-Phi-Neptune-Mistral-7B - pytorch - chemistry - biology - climate - science - philosophy - nature - ecology - biomimicry - fauna - flora - dataset:Severian/Biomimicry - dataset:emrgnt-cmplxty/sciphi-textbooks-are-all-you-need - dataset:fmars/wiki_stem - dataset:fblgit/tree-of-knowledge - dataset:Severian/Bio-Design-Process - license:artistic-2.0 - autotrain_compatible - endpoints_compatible - region:us - license:apache-2.0 model_name: ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
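If you prefer to stay in Python, the same single-file download can be done with `huggingface_hub` directly. A small sketch; the filename must match one of the quant files actually present in the repo, and `local_dir` is an illustrative choice:

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file into ./models and returns the local path.
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF",
    filename="ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    local_dir="./models",
)
print(model_path)
```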
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore, I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="chatml") # Set chat_format according to the model you are using; the prompt template above is ChatML llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
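As a quick illustration along the lines of those guides, here is a minimal LangChain sketch. It assumes `langchain-community` is installed and that the Q4_K_M file was downloaded as shown above; the prompt and sampling settings are illustrative:

```python
from langchain_community.llms import LlamaCpp

# Wrap the local GGUF file in LangChain's LlamaCpp LLM class.
llm = LlamaCpp(
    model_path="./ANIMA-Phi-Neptune-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,
    n_gpu_layers=35,  # drop or set to 0 without GPU acceleration
    temperature=0.7,
)
print(llm.invoke("Give one example of biomimicry in engineering."))
```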
Charritocdt/clasificador-muchocine
Charritocdt
2024-01-26T20:00:11Z
90
0
transformers
[ "transformers", "safetensors", "electra", "text-classification", "classification", "generated_from_trainer", "base_model:mrm8488/electricidad-base-discriminator", "base_model:finetune:mrm8488/electricidad-base-discriminator", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-26T19:59:36Z
--- base_model: mrm8488/electricidad-base-discriminator tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-muchocine results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-muchocine This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 1.4205 - Accuracy: 0.4335 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.3638 | 0.3845 | | 1.4377 | 2.0 | 776 | 1.3375 | 0.4258 | | 1.0198 | 3.0 | 1164 | 1.4205 | 0.4335 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
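A short usage sketch for this classifier, assuming the uploaded checkpoint includes its tokenizer and label mapping; the Spanish review text is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned movie-review classifier from the Hub.
clf = pipeline("text-classification", model="Charritocdt/clasificador-muchocine")
print(clf("Una película entretenida, aunque el guion flojea en la segunda mitad."))
```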
juanrodolfo/clasificador-muchocine
juanrodolfo
2024-01-26T19:59:29Z
90
0
transformers
[ "transformers", "safetensors", "electra", "text-classification", "classification", "generated_from_trainer", "base_model:mrm8488/electricidad-base-discriminator", "base_model:finetune:mrm8488/electricidad-base-discriminator", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-26T19:59:06Z
--- base_model: mrm8488/electricidad-base-discriminator tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-muchocine results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-muchocine This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 1.4925 - Accuracy: 0.4477 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.3416 | 0.4013 | | 1.3873 | 2.0 | 776 | 1.3512 | 0.4452 | | 0.9361 | 3.0 | 1164 | 1.4925 | 0.4477 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
bingo11/results
bingo11
2024-01-26T19:58:05Z
91
0
transformers
[ "transformers", "safetensors", "esm", "text-classification", "generated_from_trainer", "base_model:facebook/esm2_t6_8M_UR50D", "base_model:finetune:facebook/esm2_t6_8M_UR50D", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-26T19:57:56Z
--- license: mit base_model: facebook/esm2_t6_8M_UR50D tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [facebook/esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2900 - Accuracy: 0.8281 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0497 | 1.0 | 3107 | 1.3970 | 0.8103 | | 0.9662 | 2.0 | 6214 | 1.3196 | 0.8237 | | 0.9135 | 3.0 | 9321 | 1.2900 | 0.8281 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
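A minimal sketch of running this fine-tuned ESM-2 classifier, assuming the uploaded checkpoint includes its tokenizer and classification head; the sequence is an arbitrary example:

```python
from transformers import pipeline

# Classify a protein sequence with the fine-tuned ESM-2 checkpoint. The card does
# not document the label set, so inspect the returned label/score pairs yourself.
clf = pipeline("text-classification", model="bingo11/results")
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFE"
print(clf(sequence))
```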
brushpenbob/DiffusionBee
brushpenbob
2024-01-26T19:57:35Z
0
2
null
[ "art", "kim jung gi", "digitalart", "en", "license:creativeml-openrail-m", "region:us" ]
null
2023-12-25T16:44:13Z
--- tags: - art - kim jung gi - digitalart license: creativeml-openrail-m language: - en ---
MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-26T19:44:15Z
56
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "mlabonne/NeuralHermes-2.5-Mistral-7B", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "dpo", "rlhf", "en", "dataset:mlabonne/chatml_dpo_pairs", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-26T19:35:15Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - mlabonne/NeuralHermes-2.5-Mistral-7B - instruct - finetune - chatml - gpt4 - synthetic data - distillation - dpo - rlhf - en - dataset:mlabonne/chatml_dpo_pairs - base_model:teknium/OpenHermes-2.5-Mistral-7B - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir .
--local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["<|im_end|>"], # ChatML end-of-turn token, matching the prompt template above - please double-check before using echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="chatml") # This model's prompt template is ChatML; set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
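The bits-per-weight figures quoted in the quantisation details above follow directly from the block layouts. As a back-of-the-envelope illustration (this is not llama.cpp code; Q2_K is omitted because it packs its scales and mins differently), a 256-weight super-block gives:

```python
# Recompute effective bits-per-weight (bpw) for the k-quant layouts described above.
# Assumes a 256-weight super-block plus one fp16 super-scale (and an fp16 super-min
# for "type-1" quants); per-block scales (and mins) use the quoted bit widths.

def bpw(weight_bits, n_blocks, block_size, scale_bits, type_1):
    n_weights = n_blocks * block_size                          # 256 for all k-quants
    quant_bits = n_weights * weight_bits                       # the packed weights
    scale_data = n_blocks * scale_bits * (2 if type_1 else 1)  # per-block scales (+ mins)
    super_data = 16 * (2 if type_1 else 1)                     # fp16 super-scale (+ min)
    return (quant_bits + scale_data + super_data) / n_weights

print(f"Q3_K: {bpw(3, 16, 16, 6, type_1=False):.4f} bpw")  # 3.4375
print(f"Q4_K: {bpw(4,  8, 32, 6, type_1=True):.4f} bpw")   # 4.5000
print(f"Q5_K: {bpw(5,  8, 32, 6, type_1=True):.4f} bpw")   # 5.5000
print(f"Q6_K: {bpw(6, 16, 16, 8, type_1=False):.4f} bpw")  # 6.5625
```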
shanhy/xlmroberta_clir_seed12_cross_translation_augmentation_val_kin_0.563
shanhy
2024-01-26T19:43:31Z
85
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-26T19:42:11Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer model-index: - name: xlmroberta_clir_seed12_back_translation_augmentation_val_kin results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmroberta_clir_seed12_back_translation_augmentation_val_kin This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0830 - Spearman Corr: 0.4707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 12 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 0.48 | 200 | 0.0356 | 0.5073 | | No log | 0.97 | 400 | 0.0890 | 0.4997 | | No log | 1.45 | 600 | 0.0440 | 0.5627 | | No log | 1.94 | 800 | 0.0423 | 0.5383 | | 0.039 | 2.42 | 1000 | 0.0346 | 0.5164 | | 0.039 | 2.91 | 1200 | 0.0587 | 0.5257 | | 0.039 | 3.39 | 1400 | 0.0655 | 0.4899 | | 0.039 | 3.88 | 1600 | 0.0499 | 0.5162 | | 0.0229 | 4.36 | 1800 | 0.0616 | 0.5267 | | 0.0229 | 4.85 | 2000 | 0.0587 | 0.4955 | | 0.0229 | 5.33 | 2200 | 0.0830 | 0.4707 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
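For reference, the hyperparameter list above maps one-to-one onto `transformers` `TrainingArguments`. A minimal sketch of the equivalent configuration (an illustration rather than the author's actual training script; the output directory name is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlmroberta_clir",     # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    seed=12,
    gradient_accumulation_steps=2,    # effective train batch size of 64
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                        # "Native AMP" mixed precision
)
```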
TunahanGokcimen/albert-base-v2-cased-ner
TunahanGokcimen
2024-01-26T19:37:42Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "albert", "token-classification", "generated_from_trainer", "base_model:albert/albert-base-v2", "base_model:finetune:albert/albert-base-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-26T19:06:56Z
--- license: apache-2.0 base_model: albert/albert-base-v2 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: albert-base-v2-cased-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-cased-ner This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1912 - Precision: 0.7496 - Recall: 0.8064 - F1: 0.7770 - Accuracy: 0.9384 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.225 | 1.0 | 2078 | 0.2281 | 0.6778 | 0.7389 | 0.7070 | 0.9224 | | 0.1849 | 2.0 | 4156 | 0.1909 | 0.7194 | 0.8032 | 0.7590 | 0.9360 | | 0.1379 | 3.0 | 6234 | 0.1912 | 0.7496 | 0.8064 | 0.7770 | 0.9384 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
damao0000/videomae-base-finetuned-ucf101-subset
damao0000
2024-01-26T19:30:51Z
48
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-01-26T15:02:06Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2160 - Accuracy: 0.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 600 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0034 | 0.12 | 75 | 1.7205 | 0.4429 | | 1.0781 | 1.12 | 150 | 0.8103 | 0.6429 | | 0.5468 | 2.12 | 225 | 0.5467 | 0.6571 | | 0.2383 | 3.12 | 300 | 0.6070 | 0.8286 | | 0.1513 | 4.12 | 375 | 0.2822 | 0.9286 | | 0.0211 | 5.12 | 450 | 0.3155 | 0.9143 | | 0.012 | 6.12 | 525 | 0.1954 | 0.9286 | | 0.0048 | 7.12 | 600 | 0.2160 | 0.9286 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
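Since the card's usage sections are empty, here is a minimal inference sketch (an assumption-based illustration: it uses the standard `transformers` video-classification pipeline, needs a video decoder such as `decord` installed, and `my_clip.mp4` is a placeholder path):

```python
from transformers import pipeline

# Placeholder path; use any short clip similar to the UCF101 subset's videos.
classifier = pipeline("video-classification", model="damao0000/videomae-base-finetuned-ucf101-subset")
for prediction in classifier("my_clip.mp4"):
    print(prediction["label"], round(prediction["score"], 3))
```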
ntc-ai/SDXL-LoRA-slider.striking-a-confident-pose
ntc-ai
2024-01-26T19:28:41Z
7
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-26T19:28:37Z
--- language: - en thumbnail: "images/evaluate/striking a confident pose.../striking a confident pose_17_3.0.png" widget: - text: striking a confident pose output: url: images/striking a confident pose_17_3.0.png - text: striking a confident pose output: url: images/striking a confident pose_19_3.0.png - text: striking a confident pose output: url: images/striking a confident pose_20_3.0.png - text: striking a confident pose output: url: images/striking a confident pose_21_3.0.png - text: striking a confident pose output: url: images/striking a confident pose_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "striking a confident pose" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - striking a confident pose (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/striking a confident pose_17_-3.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_17_0.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_17_3.0.png" width=256 height=256 /> | | <img src="images/striking a confident pose_19_-3.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_19_0.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_19_3.0.png" width=256 height=256 /> | | <img src="images/striking a confident pose_20_-3.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_20_0.0.png" width=256 height=256 /> | <img src="images/striking a confident pose_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` striking a confident pose ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.striking-a-confident-pose', weight_name='striking a confident pose.safetensors', adapter_name="striking a confident pose") # Activate the LoRA pipe.set_adapters(["striking a confident pose"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, striking a confident pose" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1,140 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models.
## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
linoyts/huggy_new
linoyts
2024-01-26T19:14:09Z
8
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-26T17:51:47Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a lego toy of a <s0><s1> emoji' output: url: "image_0.png" - text: 'a lego toy of a <s0><s1> emoji' output: url: "image_1.png" - text: 'a lego toy of a <s0><s1> emoji' output: url: "image_2.png" - text: 'a lego toy of a <s0><s1> emoji' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a lego toy of a <s0><s1> emoji license: openrail++ --- # SDXL LoRA DreamBooth - linoyts/huggy_new <Gallery /> ## Model description ### These are linoyts/huggy_new LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`huggy_new.safetensors` here 💾](/linoyts/huggy_new/blob/main/huggy_new.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:huggy_new:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`huggy_new_emb.safetensors` here 💾](/linoyts/huggy_new/blob/main/huggy_new_emb.safetensors)**. - Place it in your `embeddings` folder - Use it by adding `huggy_new_emb` to your prompt. For example, `a lego toy of a huggy_new_emb emoji` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('linoyts/huggy_new', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='linoyts/huggy_new', filename='huggy_new_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('a lego toy of a <s0><s1> emoji').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger the concept `TOK`, use `<s0><s1>` in your prompt ## Details All [Files & versions](/linoyts/huggy_new/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
xgiga/clasificador-muchocine
xgiga
2024-01-26T19:10:48Z
92
0
transformers
[ "transformers", "safetensors", "electra", "text-classification", "classification", "generated_from_trainer", "base_model:mrm8488/electricidad-base-discriminator", "base_model:finetune:mrm8488/electricidad-base-discriminator", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-26T19:10:30Z
--- base_model: mrm8488/electricidad-base-discriminator tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-muchocine results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-muchocine This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4422 - Accuracy: 0.4348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.3288 | 0.3897 | | 1.4055 | 2.0 | 776 | 1.3526 | 0.4297 | | 0.9988 | 3.0 | 1164 | 1.4422 | 0.4348 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
simonycl/data-selection-Llama-2-7b-sharegpt-KMeansCentroidDeita-0.05-lora-epoch-4
simonycl
2024-01-26T19:05:03Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-01-26T19:04:51Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
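Since the get-started section above is empty, here is a minimal loading sketch (an assumption-based illustration: it presumes this repo is a standard PEFT LoRA adapter for the base model named in the frontmatter, and that you have access to the gated Llama-2 weights):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach the LoRA adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(
    base_model,
    "simonycl/data-selection-Llama-2-7b-sharegpt-KMeansCentroidDeita-0.05-lora-epoch-4",
)
```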
NoviokNova/Jane
NoviokNova
2024-01-26T19:03:50Z
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:apache-2.0", "region:us" ]
text-to-image
2024-01-26T19:03:48Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/Без названия.jfif base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null license: apache-2.0 --- # Jane the twilight <Gallery /> ## Model description 123 ## Download model [Download](/NoviokNova/Jane/tree/main) them in the Files & versions tab.
lujiashuai/marian-finetuned-kde4-en-to-zh
lujiashuai
2024-01-26T19:03:16Z
126
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-zh", "base_model:finetune:Helsinki-NLP/opus-mt-en-zh", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-01-26T16:34:34Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-zh tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-zh results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-zh_CN split: train args: en-zh_CN metrics: - name: Bleu type: bleu value: 40.36100175040232 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-zh This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.9344 - Bleu: 40.3610 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
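A quick inference sketch (illustrative only; the input string is a typical KDE4-style UI message, not taken from the card):

```python
from transformers import pipeline

translator = pipeline("translation", model="lujiashuai/marian-finetuned-kde4-en-to-zh")
print(translator("Default to expanded threads")[0]["translation_text"])
```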
danlindb/PPO-LunarLander-v2-unit8
danlindb
2024-01-26T19:02:37Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-01-26T19:01:51Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -47.47 +/- 14.41 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 2500000 'learning_rate': 0.0001 'num_envs': 12 'num_steps': 256 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.1 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'danlindb/PPO-LunarLander-v2-unit8' 'batch_size': 3072 'minibatch_size': 768} ```
YuriPaglierani/ppo-LunarLander-v2
YuriPaglierani
2024-01-26T18:59:21Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-26T18:59:04Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 266.00 +/- 15.53 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
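Until the TODO above is filled in, a minimal sketch of the usual loading pattern (the checkpoint filename is an assumption based on the standard `huggingface_sb3` naming convention; check the repo's Files tab for the actual name):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; verify it against the repository contents.
checkpoint = load_from_hub("YuriPaglierani/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```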
KukaRobotics/BobsGame_nsfw
KukaRobotics
2024-01-26T18:55:14Z
2
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/sdxl-turbo", "base_model:adapter:stabilityai/sdxl-turbo", "region:us" ]
text-to-image
2024-01-26T18:55:11Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/alice.png base_model: stabilityai/sdxl-turbo instance_prompt: null --- # BobsGame_nsfw <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/KukaRobotics/BobsGame_nsfw/tree/main) them in the Files & versions tab.
mtc/meta-llama-Llama-2-7b-hf-pubmed-summarization-1000-no-quantization-lora-full
mtc
2024-01-26T18:54:10Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-01-26T18:53:40Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
tenyaiida/RWBY_Datasets
tenyaiida
2024-01-26T18:54:10Z
0
1
null
[ "voice", "license:mit", "region:us" ]
null
2023-07-20T20:51:41Z
--- license: mit tags: - voice --- # Voice models for RWBY Characters <!-- Provide a quick summary of what the model is/does. --> Despite the name, there are no datasets here, only models. These are characters from RWBY. All models were trained with Mangio-Crepe. ## Model Details This repo features the following characters: > Weiss Schnee > Blake Belladonna > Yang Xiao Long > Nora Valkyrie ### Warnings They may not come out sounding as good as they should, so extra tweaking may be necessary. For example, Nora has a very high voice, and using her model on lower-pitched vocals may not sound right without pitch correction. ## Extra Notes Possible updates coming for Weiss, Blake, and Yang.
Nexesenex/one-man-army_UNA-34Beagles-32K-v1-iMat.GGUF
Nexesenex
2024-01-26T18:50:52Z
5
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-26T16:02:37Z
Model is likely broken: - one-man-army_UNA-34Beagles-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Hellaswag,85.75,,400,2024-01-26 01:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - one-man-army_UNA-34Beagles-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Hellaswag_Bin,80.25,,400,2024-01-26 01:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - one-man-army_UNA-34Beagles-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Arc-Challenge,57.85953177,,299,2024-01-26 05:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - one-man-army_UNA-34Beagles-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Arc-Easy,77.89473684,,570,2024-01-26 05:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - one-man-army_UNA-34Beagles-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Truthful-QA,47.73561812,,817,2024-01-26 05:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - one-man-army_UNA-34Beagles-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Winogrande,79.9526,,1267,2024-01-26 05:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - one-man-army_UNA-34Beagles-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,wikitext,5.6462,512,512,2024-01-26 01:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - one-man-army_UNA-34Beagles-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,wikitext,10.5926,4096,4096,2024-01-26 01:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - one-man-army_UNA-34Beagles-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,MMLU,42.17252396,,313,2024-01-26 05:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex Perplexity at 4k is 10.5. The Maths version is even worse.
colinw2292/Writer12
colinw2292
2024-01-26T18:47:02Z
0
0
null
[ "tensorboard", "safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T18:46:58Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
TunahanGokcimen/distilbert-base-cased-ner
TunahanGokcimen
2024-01-26T18:42:24Z
93
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "dataset:bionlp2004", "base_model:distilbert/distilbert-base-cased", "base_model:finetune:distilbert/distilbert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-26T18:24:51Z
--- license: apache-2.0 base_model: distilbert-base-cased tags: - generated_from_trainer datasets: - bionlp2004 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-cased-ner results: - task: name: Token Classification type: token-classification dataset: name: bionlp2004 type: bionlp2004 config: bionlp2004 split: validation args: bionlp2004 metrics: - name: Precision type: precision value: 0.7436025257560651 - name: Recall type: recall value: 0.8058707005222402 - name: F1 type: f1 value: 0.7734854377322617 - name: Accuracy type: accuracy value: 0.9356447830424587 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-cased-ner This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the bionlp2004 dataset. It achieves the following results on the evaluation set: - Loss: 0.2048 - Precision: 0.7436 - Recall: 0.8059 - F1: 0.7735 - Accuracy: 0.9356 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2313 | 1.0 | 2078 | 0.2127 | 0.7120 | 0.7810 | 0.7449 | 0.9287 | | 0.184 | 2.0 | 4156 | 0.1992 | 0.7258 | 0.7999 | 0.7611 | 0.9353 | | 0.1471 | 3.0 | 6234 | 0.2048 | 0.7436 | 0.8059 | 0.7735 | 0.9356 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
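A short inference sketch for this tagger (illustrative: the sentence is invented, and the entity labels come from the bionlp2004 tag set, e.g. protein, DNA, RNA, cell_line, cell_type):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="TunahanGokcimen/distilbert-base-cased-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("The IL-2 gene is regulated by NF-kappa B in human T lymphocytes."))
```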
WasdAbj/gpt2-imdb-pos-v2
WasdAbj
2024-01-26T18:38:11Z
92
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T18:37:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
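The get-started section above is empty; judging by the repository name, this appears to be a GPT-2 checkpoint tuned toward positive IMDB-style continuations, so a plain text-generation sketch should apply (the prompt is illustrative, and the interpretation of the model's purpose is an assumption):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="WasdAbj/gpt2-imdb-pos-v2")
print(generator("This movie was", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```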
Crackerjax2/mistral_instruct_generation
Crackerjax2
2024-01-26T18:36:06Z
5
0
peft
[ "peft", "tensorboard", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2024-01-24T21:36:39Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: mistralai/Mistral-7B-Instruct-v0.1 model-index: - name: mistral_instruct_generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_instruct_generation This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.0225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1172 | 1.54 | 20 | 1.9621 | | 1.7921 | 3.08 | 40 | 1.6326 | | 1.468 | 4.62 | 60 | 1.3290 | | 1.2425 | 6.15 | 80 | 1.1762 | | 1.1558 | 7.69 | 100 | 1.1167 | | 1.0912 | 9.23 | 120 | 1.0837 | | 1.0749 | 10.77 | 140 | 1.0632 | | 1.0534 | 12.31 | 160 | 1.0472 | | 1.0213 | 13.85 | 180 | 1.0332 | | 0.9962 | 15.38 | 200 | 1.0225 | ### Framework versions - PEFT 0.7.1 - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
r1char9/ruT5-base-pls
r1char9
2024-01-26T18:34:45Z
93
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "Simplification", "Summarization", "paraphrase", "ru", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-26T17:19:24Z
--- language: - ru tags: - Simplification - Summarization - paraphrase --- This model is a fine-tuned version of "ai-forever/ruT5-base" (formerly "sberbank-ai/ruT5-base") for the text simplification task. The dataset was assembled from the "RuSimpleSentEval" corpus (https://github.com/dialogue-evaluation/RuSimpleSentEval) and the "RuAdapt" corpus (https://github.com/Digital-Pushkin-Lab/RuAdapt). Training metrics: bleu: 100.0, sari: 28.699, fkgl: 31.931 (from the "train.logs" file). ``` import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model_name = "r1char9/ruT5-base-pls" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) # Example of a complex Russian sentence to simplify input_text='''Война Советского Союза против фашистской Германии и её союзников (Венгрии, Италии, Румынии, Словакии, Хорватии, Финляндии, Японии); составная часть Второй мировой войны 1939-1945 гг.''' def example(source, model, tokenizer): """ Example of simplifying a text with the model :param source: Complex input text :param model: The model :param tokenizer: The tokenizer :return: The text simplified by the model """ print(f'SOURCE: {source}') input_ids, attention_mask = tokenizer(source, return_tensors='pt').values() with torch.no_grad(): output = model.generate(input_ids=input_ids.to(model.device), attention_mask=attention_mask.to(model.device), max_new_tokens=input_ids.size(1)*2, min_length=0) return tokenizer.decode(output.squeeze(0), skip_special_tokens=True) example(input_text, model, tokenizer) ```
Nexesenex/fblgit_UNA-34BeagleSimpleMath-32K-v1-iMat.GGUF
Nexesenex
2024-01-26T18:25:34Z
4
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-26T13:25:30Z
Model is likely broken: - fblgit_UNA-34BeagleSimpleMath-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Hellaswag,86.25,,400,2024-01-26 01:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - fblgit_UNA-34BeagleSimpleMath-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Hellaswag_Bin,81,,400,2024-01-26 01:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - fblgit_UNA-34BeagleSimpleMath-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Arc-Challenge,58.19397993,,299,2024-01-26 05:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - fblgit_UNA-34BeagleSimpleMath-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Arc-Easy,77.54385965,,570,2024-01-26 05:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - fblgit_UNA-34BeagleSimpleMath-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Truthful-QA,48.71481028,,817,2024-01-26 05:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - fblgit_UNA-34BeagleSimpleMath-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,Winogrande,78.8477,,1267,2024-01-26 05:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - fblgit_UNA-34BeagleSimpleMath-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,wikitext,5.6493,512,512,2024-01-26 01:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - fblgit_UNA-34BeagleSimpleMath-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,wikitext,11.5559,4096,4096,2024-01-26 01:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex - fblgit_UNA-34BeagleSimpleMath-32K-v1-b1973-iMat-c32_ch1000-Q4_K_M.gguf,-,MMLU,42.49201278,,313,2024-01-26 05:40:00,,34b,Yi,200000,,,GGUF,Fblgit,Nexesenex Perplexity jumps to 11.5 at 4096 ctx. I'll just leave that here.
LoneStriker/deepseek-coder-7b-instruct-v1.5-5.0bpw-h6-exl2
LoneStriker
2024-01-26T18:23:47Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T18:21:40Z
---
license: other
license_name: deepseek
license_link: LICENSE
---

<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>

### 1. Introduction of Deepseek-Coder-7B-Instruct v1.5

Deepseek-Coder-7B-Instruct-v1.5 is continually pre-trained from Deepseek-LLM 7B on 2T tokens, using a 4K window size and a next-token prediction objective, and then fine-tuned on 2B tokens of instruction data.

- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)

### 2. How to Use

Here are some examples of how to use our model.

#### Chat Model Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True).cuda()
messages = [
    {'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```

### 3. License

This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use. See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.

### 4. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
macostaplata/clasificador-muchocine
macostaplata
2024-01-26T18:16:50Z
90
0
transformers
[ "transformers", "safetensors", "electra", "text-classification", "classification", "generated_from_trainer", "base_model:mrm8488/electricidad-base-discriminator", "base_model:finetune:mrm8488/electricidad-base-discriminator", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-26T18:01:02Z
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# clasificador-muchocine

This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3974
- Accuracy: 0.4310

## Model description

This model classifies user movie reviews written in Spanish into 5 categories corresponding to the number of stars given in the review (label_0 corresponds to 1 star and label_4 to 5 stars).

## Intended uses & limitations

Please note that this model has been trained on a Spanish dataset and may therefore not be suitable for classifying texts written in other languages. Also note that the accuracy achieved in the evaluation tests is around 43%.

## Training and evaluation data

The dataset was randomly split into 80% training data and 20% test data.

## Training procedure

The model was trained for 3 epochs.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3304 | 0.3948 |
| 1.4184 | 2.0 | 776 | 1.3010 | 0.4297 |
| 0.9847 | 3.0 | 1164 | 1.3974 | 0.4310 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
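A minimal inference sketch with the `text-classification` pipeline; the example review is illustrative, and the star conversion assumes the Trainer's default `LABEL_<n>` naming for the label scheme described above:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="macostaplata/clasificador-muchocine")

review = "Una película entretenida, aunque el guion flojea en la segunda mitad."
pred = classifier(review)[0]  # e.g. {'label': 'LABEL_2', 'score': 0.41}

# label_0 -> 1 star ... label_4 -> 5 stars, per the mapping in the card.
stars = int(pred["label"].split("_")[-1]) + 1
print(f"{stars} stars (confidence {pred['score']:.2f})")
```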
hewliyang/distilhubert-fintuned-gtzan
hewliyang
2024-01-26T18:11:54Z
146
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-01-26T18:11:49Z
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.87 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5438 - Accuracy: 0.87 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0028 | 1.0 | 113 | 1.7975 | 0.52 | | 1.3164 | 2.0 | 226 | 1.2010 | 0.68 | | 1.0627 | 3.0 | 339 | 0.9305 | 0.76 | | 0.8829 | 4.0 | 452 | 0.8470 | 0.74 | | 0.6671 | 5.0 | 565 | 0.7021 | 0.78 | | 0.4053 | 6.0 | 678 | 0.6707 | 0.79 | | 0.4309 | 7.0 | 791 | 0.5799 | 0.84 | | 0.1564 | 8.0 | 904 | 0.5560 | 0.8 | | 0.2747 | 9.0 | 1017 | 0.5645 | 0.84 | | 0.1592 | 10.0 | 1130 | 0.5438 | 0.87 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.1
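The card lists no usage example; a minimal sketch with the `audio-classification` pipeline (the audio file path is illustrative):

```python
from transformers import pipeline

# Music genre classification on a local clip; replace the path with your own file.
classifier = pipeline("audio-classification", model="hewliyang/distilhubert-fintuned-gtzan")
predictions = classifier("my_song.wav")
print(predictions[:3])  # top genre labels with scores
```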
chezhian/llama2_finetuned_chatbot
chezhian
2024-01-26T18:11:11Z
0
0
null
[ "tensorboard", "generated_from_trainer", "region:us" ]
null
2024-01-26T18:02:37Z
--- tags: - generated_from_trainer model-index: - name: llama2_finetuned_chatbot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2_finetuned_chatbot This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.13.3
luizguilherme/q-FrozenLake-v1-4x4-noSlippery
luizguilherme
2024-01-26T18:07:03Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-26T18:07:01Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="luizguilherme/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
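The usage snippet above assumes `load_from_hub` and `gym` are already in scope; neither is defined in the card. A minimal sketch of the helper, modeled on the one used in the Hugging Face Deep RL course notebooks (`hf_hub_download` plus `pickle`; treat the exact shape as an assumption):

```python
import pickle

import gym  # or gymnasium, depending on your setup
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict from the Hub and deserialize it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```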
Araeynn/open-llama-3b-v2-chat-coreml
Araeynn
2024-01-26T18:04:34Z
4
1
null
[ "coreml", "text-generation-inference", "text-generation", "en", "arxiv:1910.09700", "license:mit", "region:us" ]
text-generation
2024-01-26T15:44:07Z
--- license: mit language: - en pipeline_tag: text-generation tags: - text-generation-inference --- # Model Card for Model ID ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bartowski/WestLake-7B-v2-laser-exl2
bartowski
2024-01-26T18:00:17Z
0
1
null
[ "text-generation", "license:apache-2.0", "region:us" ]
text-generation
2024-01-26T17:40:53Z
---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of WestLake-7B-v2-laser

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.

# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains an individual bits-per-weight quantization, with the main one containing only the measurement.json for further conversions.

Original model: https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/WestLake-7B-v2-laser-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/WestLake-7B-v2-laser-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/WestLake-7B-v2-laser-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/WestLake-7B-v2-laser-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/WestLake-7B-v2-laser-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/WestLake-7B-v2-laser-exl2 WestLake-7B-v2-laser-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` (only useful if you only care about measurement.json) branch to a folder called `WestLake-7B-v2-laser-exl2`:

```shell
mkdir WestLake-7B-v2-laser-exl2
huggingface-cli download bartowski/WestLake-7B-v2-laser-exl2 --local-dir WestLake-7B-v2-laser-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir WestLake-7B-v2-laser-exl2-6_5
huggingface-cli download bartowski/WestLake-7B-v2-laser-exl2 --revision 6_5 --local-dir WestLake-7B-v2-laser-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
mkdir WestLake-7B-v2-laser-exl2-6.5
huggingface-cli download bartowski/WestLake-7B-v2-laser-exl2 --revision 6_5 --local-dir WestLake-7B-v2-laser-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
tanatapanun/fine-tuned-BioBART-50-epochs-1024-input-224-output
tanatapanun
2024-01-26T17:48:34Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:GanjinZero/biobart-base", "base_model:finetune:GanjinZero/biobart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-25T19:23:49Z
---
license: apache-2.0
base_model: GanjinZero/biobart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-50-epochs-1024-input-224-output
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# fine-tuned-BioBART-50-epochs-1024-input-224-output

This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co/GanjinZero/biobart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2443
- Rouge1: 0.1837
- Rouge2: 0.028
- Rougel: 0.1489
- Rougelsum: 0.1474
- Gen Len: 42.57

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 8.7928 | 0.0036 | 0.0 | 0.0035 | 0.0036 | 14.33 |
| No log | 2.0 | 302 | 3.8946 | 0.0 | 0.0 | 0.0 | 0.0 | 4.27 |
| No log | 3.0 | 453 | 1.2664 | 0.129 | 0.0304 | 0.1174 | 0.1169 | 15.04 |
| 5.8095 | 4.0 | 604 | 1.1007 | 0.1115 | 0.0252 | 0.0884 | 0.0882 | 24.25 |
| 5.8095 | 5.0 | 755 | 1.0410 | 0.1446 | 0.0292 | 0.1129 | 0.1126 | 50.47 |
| 5.8095 | 6.0 | 906 | 0.9980 | 0.1189 | 0.029 | 0.0855 | 0.0856 | 36.31 |
| 0.976 | 7.0 | 1057 | 0.9752 | 0.1261 | 0.0328 | 0.094 | 0.0944 | 33.23 |
| 0.976 | 8.0 | 1208 | 0.9652 | 0.153 | 0.0347 | 0.1144 | 0.114 | 37.16 |
| 0.976 | 9.0 | 1359 | 0.9640 | 0.1473 | 0.0266 | 0.1214 | 0.1209 | 30.78 |
| 0.6318 | 10.0 | 1510 | 0.9684 | 0.1687 | 0.0364 | 0.1366 | 0.1355 | 33.88 |
| 0.6318 | 11.0 | 1661 | 0.9682 | 0.1856 | 0.0405 | 0.1448 | 0.143 | 32.34 |
| 0.6318 | 12.0 | 1812 | 0.9893 | 0.1693 | 0.0288 | 0.1293 | 0.1283 | 45.33 |
| 0.6318 | 13.0 | 1963 | 1.0004 | 0.1821 | 0.0464 | 0.1479 | 0.147 | 29.72 |
| 0.3943 | 14.0 | 2114 | 1.0180 | 0.1659 | 0.0233 | 0.1306 | 0.1289 | 30.25 |
| 0.3943 | 15.0 | 2265 | 1.0289 | 0.188 | 0.0402 | 0.1525 | 0.1519 | 36.03 |
| 0.3943 | 16.0 | 2416 | 1.0434 | 0.1928 | 0.05 | 0.158 | 0.1575 | 35.88 |
| 0.2174 | 17.0 | 2567 | 1.0522 | 0.1824 | 0.0453 | 0.1508 | 0.1491 | 35.06 |
| 0.2174 | 18.0 | 2718 | 1.0749 | 0.1804 | 0.0447 | 0.1512 | 0.1498 | 35.17 |
| 0.2174 | 19.0 | 2869 | 1.0854 | 0.1774 | 0.0417 | 0.1483 | 0.1456 | 36.54 |
| 0.1076 | 20.0 | 3020 | 1.0986 | 0.1606 | 0.0389 | 0.1308 | 0.1309 | 37.39 |
| 0.1076 | 21.0 | 3171 | 1.1028 | 0.1727 | 0.0445 | 0.1421 | 0.1408 | 40.47 |
| 0.1076 | 22.0 | 3322 | 1.1114 | 0.1776 | 0.0401 | 0.1431 | 0.1402 | 52.41 |
| 0.1076 | 23.0 | 3473 | 1.1294 | 0.1876 | 0.0374 | 0.1552 | 0.153 | 36.39 |
| 0.0497 | 24.0 | 3624 | 1.1328 | 0.1862 | 0.0443 | 0.1551 | 0.1526 | 47.9 |
| 0.0497 | 25.0 | 3775 | 1.1398 | 0.198 | 0.0513 | 0.1599 | 0.1577 | 52.27 |
| 0.0497 | 26.0 | 3926 | 1.1626 | 0.1723 | 0.0306 | 0.1414 | 0.1397 | 38.1 |
| 0.025 | 27.0 | 4077 | 1.1681 | 0.1867 | 0.0412 | 0.1547 | 0.152 | 37.74 |
| 0.025 | 28.0 | 4228 | 1.1717 | 0.1454 | 0.0217 | 0.1103 | 0.1088 | 39.31 |
| 0.025 | 29.0 | 4379 | 1.1802 | 0.1668 | 0.023 | 0.13 | 0.1285 | 44.39 |
| 0.0155 | 30.0 | 4530 | 1.1900 | 0.1732 | 0.0335 | 0.1434 | 0.1419 | 41.42 |
| 0.0155 | 31.0 | 4681 | 1.1900 | 0.1828 | 0.0411 | 0.1542 | 0.1527 | 39.23 |
| 0.0155 | 32.0 | 4832 | 1.1996 | 0.1902 | 0.0346 | 0.1536 | 0.1523 | 44.7 |
| 0.0155 | 33.0 | 4983 | 1.1976 | 0.1878 | 0.0511 | 0.1517 | 0.1509 | 40.07 |
| 0.0103 | 34.0 | 5134 | 1.2044 | 0.1847 | 0.0402 | 0.1481 | 0.1478 | 47.21 |
| 0.0103 | 35.0 | 5285 | 1.2020 | 0.1915 | 0.0351 | 0.1504 | 0.1489 | 56.49 |
| 0.0103 | 36.0 | 5436 | 1.2098 | 0.1941 | 0.0385 | 0.1546 | 0.1535 | 48.35 |
| 0.008 | 37.0 | 5587 | 1.2223 | 0.1842 | 0.0414 | 0.1499 | 0.1478 | 41.0 |
| 0.008 | 38.0 | 5738 | 1.2183 | 0.1899 | 0.0349 | 0.1526 | 0.1516 | 41.29 |
| 0.008 | 39.0 | 5889 | 1.2294 | 0.1928 | 0.0384 | 0.1578 | 0.1568 | 39.19 |
| 0.0062 | 40.0 | 6040 | 1.2325 | 0.2012 | 0.046 | 0.1653 | 0.1637 | 42.46 |
| 0.0062 | 41.0 | 6191 | 1.2352 | 0.1862 | 0.0326 | 0.1519 | 0.1496 | 39.36 |
| 0.0062 | 42.0 | 6342 | 1.2343 | 0.1956 | 0.0369 | 0.1635 | 0.1618 | 41.77 |
| 0.0062 | 43.0 | 6493 | 1.2371 | 0.1838 | 0.0299 | 0.1529 | 0.1521 | 42.28 |
| 0.0049 | 44.0 | 6644 | 1.2351 | 0.191 | 0.0358 | 0.1574 | 0.1565 | 41.81 |
| 0.0049 | 45.0 | 6795 | 1.2403 | 0.1898 | 0.035 | 0.1522 | 0.1505 | 41.59 |
| 0.0049 | 46.0 | 6946 | 1.2394 | 0.2021 | 0.0465 | 0.1702 | 0.1686 | 43.89 |
| 0.004 | 47.0 | 7097 | 1.2412 | 0.1981 | 0.0416 | 0.164 | 0.1632 | 41.01 |
| 0.004 | 48.0 | 7248 | 1.2427 | 0.1765 | 0.0246 | 0.1447 | 0.1439 | 41.12 |
| 0.004 | 49.0 | 7399 | 1.2427 | 0.1806 | 0.0279 | 0.1473 | 0.1461 | 42.85 |
| 0.0036 | 50.0 | 7550 | 1.2443 | 0.1837 | 0.028 | 0.1489 | 0.1474 | 42.57 |

### Framework versions

- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
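The card does not document usage; a minimal sketch with the `text2text-generation` pipeline, where the 224-token output budget mirrors the model name (the input document and generation settings are illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="tanatapanun/fine-tuned-BioBART-50-epochs-1024-input-224-output",
)

source = "..."  # replace with an input document of up to ~1024 tokens
print(generator(source, max_new_tokens=224)[0]["generated_text"])
```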
mmpc/phi-2-text
mmpc
2024-01-26T17:46:10Z
33
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T17:40:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alnrg2arg/test3_sft_lora
alnrg2arg
2024-01-26T17:46:07Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-26T17:46:06Z
---
library_name: transformers
tags:
- unsloth
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AItestaccount/LLMPromptGen-AI
AItestaccount
2024-01-26T17:38:08Z
5
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-01-26T17:30:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-26T17:29:34Z
25
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "maywell/Synatra-7B-v0.3-RP", "pytorch", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us", "license:apache-2.0", "base_model:MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-26T15:59:59Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - maywell/Synatra-7B-v0.3-RP - pytorch - ko - license:cc-by-nc-4.0 - autotrain_compatible - endpoints_compatible - region:us - license:apache-2.0 model_name: Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

</details>

## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: [MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./Synatra-7B-v0.3-RP-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="chatml")  # Set chat_format according to the model you are using; the prompt template above is ChatML
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
vishesh-t27/22-Neuro_Model
vishesh-t27
2024-01-26T17:14:58Z
13
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mlabonne/NeuralMarcoro14-7B", "Neuronovo/neuronovo-7B-v0.2", "base_model:Neuronovo/neuronovo-9B-v0.2", "base_model:merge:Neuronovo/neuronovo-9B-v0.2", "base_model:mlabonne/NeuralMarcoro14-7B", "base_model:merge:mlabonne/NeuralMarcoro14-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-10T16:48:47Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - mlabonne/NeuralMarcoro14-7B - Neuronovo/neuronovo-7B-v0.2 base_model: - mlabonne/NeuralMarcoro14-7B - Neuronovo/neuronovo-7B-v0.2 --- # 22-Neuro_Model 22-Neuro_Model is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B) * [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2) ## 🧩 Configuration ```yaml slices: - sources: - model: mlabonne/NeuralMarcoro14-7B layer_range: [0, 32] - model: Neuronovo/neuronovo-7B-v0.2 layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "vishesht27/22-Neuro_Model" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
xezpeleta/latxa-7b-v1-gguf
xezpeleta
2024-01-26T17:11:37Z
14
1
null
[ "gguf", "latxa", "7b", "hitz", "llama", "text-generation", "eu", "base_model:HiTZ/latxa-7b-v1", "base_model:quantized:HiTZ/latxa-7b-v1", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T15:34:40Z
---
language:
- eu
pipeline_tag: text-generation
tags:
- gguf
- latxa
- 7b
- hitz
- llama
model_name: latxa-7b-v1
base_model: HiTZ/latxa-7b-v1
---

# Latxa 7b GGUF

## Provided files

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [latxa-7b-v1.gguf](https://huggingface.co/xezpeleta/latxa-7b-v1-gguf/blob/main/latxa-7b-v1.gguf) | | 26 GB |
| [latxa-7b-v1-f16.gguf](https://huggingface.co/xezpeleta/latxa-7b-v1-gguf/blob/main/latxa-7b-v1-f16.gguf) | | 13 GB |
| [latxa-7b-v1-q8_0.gguf](https://huggingface.co/xezpeleta/latxa-7b-v1-gguf/blob/main/latxa-7b-v1-q8_0.gguf) | Q8_0 | 6.7 GB |
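The card lists the files but no usage; a minimal llama-cpp-python sketch with the Q8_0 file from the table above (the prompt and settings are illustrative):

```python
from llama_cpp import Llama

# Load the Q8_0 quantization listed above; download the file first.
llm = Llama(model_path="./latxa-7b-v1-q8_0.gguf", n_ctx=4096)

out = llm("Euskara hizkuntza bat da, eta", max_tokens=64)
print(out["choices"][0]["text"])
```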
flyman123/texi-model
flyman123
2024-01-26T17:10:09Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-26T17:10:04Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: texi-model
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.69
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="flyman123/texi-model", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
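Building on the usage snippet above (and the same `load_from_hub` helper shown in the FrozenLake card earlier), a sketch of rolling out the greedy policy; it assumes the pickled dict stores the table under a "qtable" key and a gymnasium-style step API, both conventions of the Deep RL course rather than facts stated in this card:

```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```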
flyman123/q-FrozenLake-v1-4x4-noSlippery
flyman123
2024-01-26T17:08:13Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-26T17:08:11Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="flyman123/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
LoneStriker/NeuralMarcoro14-7B-6.0bpw-h6-exl2
LoneStriker
2024-01-26T17:07:05Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mlabonne/Marcoro14-7B-slerp", "dpo", "rlhf", "conversational", "dataset:mlabonne/chatml_dpo_pairs", "base_model:mlabonne/Marcoro14-7B-slerp", "base_model:finetune:mlabonne/Marcoro14-7B-slerp", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T17:04:44Z
--- base_model: mlabonne/Marcoro14-7B-slerp license: cc-by-nc-4.0 tags: - mlabonne/Marcoro14-7B-slerp - dpo - rlhf datasets: - mlabonne/chatml_dpo_pairs --- ![](https://i.imgur.com/CBen22L.jpg) # NeuralMarcoro14-7B This is a DPO fine-tuned version of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) using the [chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) preference dataset. It improves the performance of the model on the Nous benchmark suite and the Open LLM Leaderboard. It is currently the best-performing 7B LLM on the Open LLM Leaderboard (08/01/24). You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralMarcoro14-7B-GGUF-Chat) (GGUF Q4_K_M). ## ⚡ Quantized models * **GGUF**: https://huggingface.co/mlabonne/NeuralMarcoro14-7B-GGUF ## 🏆 Evaluation ### Open LLM Leaderboard ![](https://i.imgur.com/Int9P07.png) ![](https://i.imgur.com/70NXUKD.png) ### Nous | Model |AGIEval|GPT4ALL|TruthfulQA|Bigbench|Average| |-------------------------|------:|------:|---------:|-------:|------:| |[NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)| 44.59| 76.17| 65.94| 46.9| 58.4| |[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67| |Change | -0.07| -0.07| +1.79| +1.26| +0.73| ## 🧩 Training hyperparameters **LoRA**: * r=16 * lora_alpha=16 * lora_dropout=0.05 * bias="none" * task_type="CAUSAL_LM" * target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] **Training arguments**: * per_device_train_batch_size=4 * gradient_accumulation_steps=4 * gradient_checkpointing=True * learning_rate=5e-5 * lr_scheduler_type="cosine" * max_steps=200 * optim="paged_adamw_32bit" * warmup_steps=100 **DPOTrainer**: * beta=0.1 * max_prompt_length=1024 * max_length=1536 ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/NeuralMarcoro14-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
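As a hedged illustration of how the hyperparameters listed above map onto the `peft` and `trl` APIs, here is a minimal configuration sketch. The model and dataset names come from this card, but the column mapping, output path, and the exact `DPOTrainer` signature (which changed across `trl` versions) are assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("mlabonne/Marcoro14-7B-slerp")
tokenizer = AutoTokenizer.from_pretrained("mlabonne/Marcoro14-7B-slerp")

# The raw dataset must first be mapped to prompt/chosen/rejected columns (not shown here)
train_dataset = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

peft_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'],
)

training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    output_dir="./neuralmarcoro14-dpo",  # placeholder output path
)

trainer = DPOTrainer(
    model,
    args=training_args,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```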
LoneStriker/NeuralMarcoro14-7B-5.0bpw-h6-exl2
LoneStriker
2024-01-26T17:02:28Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mlabonne/Marcoro14-7B-slerp", "dpo", "rlhf", "conversational", "dataset:mlabonne/chatml_dpo_pairs", "base_model:mlabonne/Marcoro14-7B-slerp", "base_model:finetune:mlabonne/Marcoro14-7B-slerp", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T17:00:25Z
--- base_model: mlabonne/Marcoro14-7B-slerp license: cc-by-nc-4.0 tags: - mlabonne/Marcoro14-7B-slerp - dpo - rlhf datasets: - mlabonne/chatml_dpo_pairs --- ![](https://i.imgur.com/CBen22L.jpg) # NeuralMarcoro14-7B This is a DPO fine-tuned version of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) using the [chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) preference dataset. It improves the performance of the model on the Nous benchmark suite and the Open LLM Leaderboard. It is currently the best-performing 7B LLM on the Open LLM Leaderboard (08/01/24). You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralMarcoro14-7B-GGUF-Chat) (GGUF Q4_K_M). ## ⚡ Quantized models * **GGUF**: https://huggingface.co/mlabonne/NeuralMarcoro14-7B-GGUF ## 🏆 Evaluation ### Open LLM Leaderboard ![](https://i.imgur.com/Int9P07.png) ![](https://i.imgur.com/70NXUKD.png) ### Nous | Model |AGIEval|GPT4ALL|TruthfulQA|Bigbench|Average| |-------------------------|------:|------:|---------:|-------:|------:| |[NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)| 44.59| 76.17| 65.94| 46.9| 58.4| |[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67| |Change | -0.07| -0.07| +1.79| +1.26| +0.73| ## 🧩 Training hyperparameters **LoRA**: * r=16 * lora_alpha=16 * lora_dropout=0.05 * bias="none" * task_type="CAUSAL_LM" * target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] **Training arguments**: * per_device_train_batch_size=4 * gradient_accumulation_steps=4 * gradient_checkpointing=True * learning_rate=5e-5 * lr_scheduler_type="cosine" * max_steps=200 * optim="paged_adamw_32bit" * warmup_steps=100 **DPOTrainer**: * beta=0.1 * max_prompt_length=1024 * max_length=1536 ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/NeuralMarcoro14-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
LoneStriker/NeuralMarcoro14-7B-3.0bpw-h6-exl2
LoneStriker
2024-01-26T16:52:36Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mlabonne/Marcoro14-7B-slerp", "dpo", "rlhf", "conversational", "dataset:mlabonne/chatml_dpo_pairs", "base_model:mlabonne/Marcoro14-7B-slerp", "base_model:finetune:mlabonne/Marcoro14-7B-slerp", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T16:51:16Z
--- base_model: mlabonne/Marcoro14-7B-slerp license: cc-by-nc-4.0 tags: - mlabonne/Marcoro14-7B-slerp - dpo - rlhf datasets: - mlabonne/chatml_dpo_pairs --- ![](https://i.imgur.com/CBen22L.jpg) # NeuralMarcoro14-7B This is a DPO fine-tuned version of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) using the [chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) preference dataset. It improves the performance of the model on the Nous benchmark suite and the Open LLM Leaderboard. It is currently the best-performing 7B LLM on the Open LLM Leaderboard (08/01/24). You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralMarcoro14-7B-GGUF-Chat) (GGUF Q4_K_M). ## ⚡ Quantized models * **GGUF**: https://huggingface.co/mlabonne/NeuralMarcoro14-7B-GGUF ## 🏆 Evaluation ### Open LLM Leaderboard ![](https://i.imgur.com/Int9P07.png) ![](https://i.imgur.com/70NXUKD.png) ### Nous | Model |AGIEval|GPT4ALL|TruthfulQA|Bigbench|Average| |-------------------------|------:|------:|---------:|-------:|------:| |[NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)| 44.59| 76.17| 65.94| 46.9| 58.4| |[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67| |Change | -0.07| -0.07| +1.79| +1.26| +0.73| ## 🧩 Training hyperparameters **LoRA**: * r=16 * lora_alpha=16 * lora_dropout=0.05 * bias="none" * task_type="CAUSAL_LM" * target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] **Training arguments**: * per_device_train_batch_size=4 * gradient_accumulation_steps=4 * gradient_checkpointing=True * learning_rate=5e-5 * lr_scheduler_type="cosine" * max_steps=200 * optim="paged_adamw_32bit" * warmup_steps=100 **DPOTrainer**: * beta=0.1 * max_prompt_length=1024 * max_length=1536 ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/NeuralMarcoro14-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
21world/deepseek-coder-7b-instruct-v1.5-GGUF
21world
2024-01-26T16:50:03Z
284
1
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-26T13:35:11Z
Original model: https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5
grrfdghebsz/FineTuning
grrfdghebsz
2024-01-26T16:42:23Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-26T11:24:59Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: FineTuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FineTuning This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2930 - Jaccard (sample): [0.6076190476190476, 0.4382716049382716, 0.6174636174636174, 0.40869565217391307, 0.25, 0.2815884476534296, 0.20987654320987653, 0.0] - Jaccard (macro): 0.3517 - Macro-precision: 0.5303 - Macro-recall: 0.4556 - Macro F1: 0.4869 - Micro F1: 0.6352 - Accuracy: 0.3742 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Jaccard (sample) | Jaccard (macro) | Macro-precision | Macro-recall | Macro F1 | Micro F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------:|:---------------:|:------------:|:--------:|:--------:|:--------:| | 0.4975 | 1.0 | 339 | 0.3645 | [0.35964912280701755, 0.14566929133858267, 0.014742014742014743, 0.0, 0.0, 0.0, 0.0, 0.0] | 0.0650 | 0.3352 | 0.0688 | 0.1015 | 0.2109 | 0.145 | | 0.37 | 2.0 | 678 | 0.3212 | [0.5595238095238095, 0.41836734693877553, 0.519916142557652, 0.26013513513513514, 0.15384615384615385, 0.2743055555555556, 0.266304347826087, 0.0] | 0.3065 | 0.5170 | 0.4010 | 0.4403 | 0.5802 | 0.315 | | 0.3193 | 3.0 | 1017 | 0.3064 | [0.6198198198198198, 0.39100346020761245, 0.5995893223819302, 0.35597826086956524, 0.0, 0.08071748878923767, 0.032520325203252036, 0.0] | 0.2600 | 0.5439 | 0.3302 | 0.3518 | 0.5995 | 0.3592 | | 0.2894 | 4.0 | 1356 | 0.2954 | [0.5859375, 0.4208955223880597, 0.5944333996023857, 0.3930635838150289, 0.13333333333333333, 0.199203187250996, 0.21084337349397592, 0.0] | 0.3172 | 0.5068 | 0.4093 | 0.4446 | 0.6130 | 0.3667 | | 0.2679 | 5.0 | 1695 | 0.2927 | [0.5968688845401174, 0.42342342342342343, 0.6072186836518046, 0.39039039039039036, 0.23076923076923078, 0.2847222222222222, 0.20261437908496732, 0.0] | 0.3420 | 0.5541 | 0.4299 | 0.4769 | 0.6247 | 0.3758 | | 0.2519 | 6.0 | 2034 | 0.2930 | [0.6076190476190476, 0.4382716049382716, 0.6174636174636174, 0.40869565217391307, 0.25, 0.2815884476534296, 0.20987654320987653, 0.0] | 0.3517 | 0.5303 | 0.4556 | 0.4869 | 0.6352 | 0.3742 | ### Framework versions - Transformers 4.36.2 - Pytorch 1.13.0 - Datasets 2.16.1 - Tokenizers 0.15.0
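For clarity on the metrics reported above, the sketch below shows how per-label, macro, and micro scores can be computed with scikit-learn for a multi-label problem. The eight-label setup and the 0.5 decision threshold are assumptions, inferred from the eight values in the per-label Jaccard list; the data here is a random toy example.

```python
import numpy as np
from sklearn.metrics import f1_score, jaccard_score

# Toy multi-label targets and predictions: 4 examples over 8 labels (assumed setup)
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=(4, 8))
y_prob = rng.random((4, 8))
y_pred = (y_prob >= 0.5).astype(int)  # threshold sigmoid outputs at 0.5

print(jaccard_score(y_true, y_pred, average=None, zero_division=0))      # one score per label
print(jaccard_score(y_true, y_pred, average="macro", zero_division=0))   # macro Jaccard
print(f1_score(y_true, y_pred, average="macro", zero_division=0))        # macro F1
print(f1_score(y_true, y_pred, average="micro", zero_division=0))        # micro F1
```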
raqdo09/clasificador-amazon_polarity
raqdo09
2024-01-26T16:41:29Z
93
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "classification", "generated_from_trainer", "base_model:ReynaQuita/twitter_disaster_distilbert", "base_model:finetune:ReynaQuita/twitter_disaster_distilbert", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-26T16:41:17Z
--- base_model: ReynaQuita/twitter_disaster_distilbert tags: - classification - generated_from_trainer model-index: - name: clasificador-amazon_polarity results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-amazon_polarity This model is a fine-tuned version of [ReynaQuita/twitter_disaster_distilbert](https://huggingface.co/ReynaQuita/twitter_disaster_distilbert) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
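No usage snippet is provided, so here is a minimal hedged example with the transformers pipeline; the input sentence is a placeholder, and the label names returned depend on the fine-tuned classification head.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="raqdo09/clasificador-amazon_polarity")
print(classifier("This product exceeded my expectations!"))  # placeholder input
```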
erdometo/xlm-roberta-base-finetuned-TQuad
erdometo
2024-01-26T16:25:29Z
29
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "question-answering", "tr", "dataset:erdometo/TQuad.0.1", "base_model:IProject-10/xlm-roberta-base-finetuned-squad2", "base_model:finetune:IProject-10/xlm-roberta-base-finetuned-squad2", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2024-01-26T13:50:19Z
--- license: mit base_model: IProject-10/xlm-roberta-base-finetuned-squad2 model-index: - name: xlm-roberta-base-finetuned-TQuad results: [] datasets: - erdometo/TQuad.0.1 language: - tr --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-TQuad This model is a fine-tuned version of [IProject-10/xlm-roberta-base-finetuned-squad2](https://huggingface.co/IProject-10/xlm-roberta-base-finetuned-squad2) on the TQuad 0.1 dataset. It achieves the following results on the evaluation set: - Loss: 1.2300 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3241 | 1.0 | 520 | 1.2748 | | 0.9743 | 2.0 | 1040 | 1.2063 | | 0.7823 | 3.0 | 1560 | 1.2300 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
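Since the card lists no usage example, here is a hedged sketch of running this question-answering model with the transformers pipeline; the Turkish question and context are invented placeholders.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="erdometo/xlm-roberta-base-finetuned-TQuad")

result = qa(
    question="Türkiye'nin başkenti neresidir?",  # "What is the capital of Turkey?"
    context="Türkiye'nin başkenti Ankara'dır ve en kalabalık şehri İstanbul'dur.",
)
print(result["answer"], result["score"])
```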
gilbertcarey/llama2-qlora-finetunined-french
gilbertcarey
2024-01-26T16:23:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-26T16:03:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheBloke/EstopianMaid-13B-GGUF
TheBloke
2024-01-26T16:18:21Z
2,769
46
transformers
[ "transformers", "gguf", "llama", "roleplay", "text-generation-inference", "en", "base_model:KatyTheCutie/EstopianMaid-13B", "base_model:quantized:KatyTheCutie/EstopianMaid-13B", "license:apache-2.0", "region:us" ]
null
2024-01-26T15:56:26Z
--- base_model: KatyTheCutie/EstopianMaid-13B inference: false language: - en library_name: transformers license: apache-2.0 model_creator: Katy Vetteriano model_name: EstopianMaid 13B model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - roleplay - text-generation-inference --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # EstopianMaid 13B - GGUF - Model creator: [Katy Vetteriano](https://huggingface.co/KatyTheCutie) - Original model: [EstopianMaid 13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [Katy Vetteriano's EstopianMaid 13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/EstopianMaid-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/EstopianMaid-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF) * [Katy Vetteriano's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/KatyTheCutie/EstopianMaid-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Katy Vetteriano's EstopianMaid 13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. 
Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [estopianmaid-13b.Q2_K.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q2_K.gguf) | Q2_K | 2 | 4.85 GB| 7.35 GB | significant quality loss - not recommended for most purposes | | [estopianmaid-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [estopianmaid-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [estopianmaid-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [estopianmaid-13b.Q4_0.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [estopianmaid-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss | | [estopianmaid-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [estopianmaid-13b.Q5_0.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [estopianmaid-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [estopianmaid-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [estopianmaid-13b.Q6_K.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [estopianmaid-13b.Q8_0.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/EstopianMaid-13B-GGUF and below it, a specific filename to download, such as: estopianmaid-13b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/EstopianMaid-13B-GGUF estopianmaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/EstopianMaid-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/EstopianMaid-13B-GGUF estopianmaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m estopianmaid-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./estopianmaid-13b.Q4_K_M.gguf", # Download the model file first n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./estopianmaid-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. 
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Katy Vetteriano's EstopianMaid 13B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/653a2392341143f7774424d8/fyK_RtEjb9sLF_Mq0nZm2.png) Based on user feedback, EstopianMaid: - is good at sticking to the character card. - maintains coherency in a setting with multiple characters. - is able to create new scenarios. Prompt Template: Alpaca ### Instruction: {prompt} ### Response: Recommended settings: - SillyTavern Default Preset. - Temperature: 0.7 - Min-P: 0.3 - Amount to Gen: 256 - Top P: 1 - Repetition penalty: 1.10 Models used: BlueNipples/TimeCrystal-l2-13B cgato/Thespis-13b-DPO-v0.7 KoboldAI/LLaMA2-13B-Estopia NeverSleep/Noromaid-13B-0.4-DPO Doctor-Shotgun/cat-v1.0-13b Feedback is always appreciated! Thank you to KoboldAI for the use of their MergeBox, and to Caitlyn G. for their support and feedback. <!-- original-model-card end -->
Devdeshitha/Mistral_7B_new
Devdeshitha
2024-01-26T16:09:56Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-26T15:17:13Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AIMH-DHgroup/llama-2-7b-chat-events
AIMH-DHgroup
2024-01-26T16:07:04Z
10
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "story maps", "events", "conversational", "dataset:AIMH-DHgroup/llama-2-7b-chat-events", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T15:40:52Z
--- license: apache-2.0 datasets: - AIMH-DHgroup/llama-2-7b-chat-events pipeline_tag: text-generation tags: - story maps - events ---
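The card itself contains only metadata, so as a hedged starting point, here is a generic text-generation snippet for a Llama-2-chat derivative like this one; the prompt and generation settings are placeholders inferred from the model's "story maps" and "events" tags.

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AIMH-DHgroup/llama-2-7b-chat-events",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Placeholder prompt; the model is tagged for story maps / event extraction
prompt = "Extract the main events from the following story: ..."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```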
dsfsi/en-ss-m2m100-combo
dsfsi
2024-01-26T16:05:34Z
119
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "siswati", "m2m100", "south-africa", "ss", "en", "dataset:dsfsi/vukuzenzele-sentence-aligned", "dataset:dsfsi/gov-za-monolingual", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-25T09:40:21Z
--- license: cc-by-4.0 datasets: - dsfsi/vukuzenzele-sentence-aligned - dsfsi/gov-za-monolingual language: - ss - en metrics: - bleu tags: - siswati - m2m100 - south-africa --- # [en-ss] English to Siswati Translation Model Based on the M2M100 model, trained using the ZA-gov-multilingual and Vuk'uzenzele South African multilingual corpora and the SADiLaR Bilingual English-Siswati Corpus. The model was created from English-to-Siswati aligned sentences drawn from ZA-gov-multilingual, Vuk'uzenzele, and SADiLaR. ## Authors - Richard Lastrucci - Vukosi Marivate - [@vukosi](https://twitter.com/vukosi)
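A hedged usage sketch with the M2M100 classes from transformers follows; it assumes the fine-tuned model keeps M2M100's language codes, with `en` as source and `ss` (Siswati) as target, and the input sentence is a placeholder.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("dsfsi/en-ss-m2m100-combo")
tokenizer = M2M100Tokenizer.from_pretrained("dsfsi/en-ss-m2m100-combo")

tokenizer.src_lang = "en"
inputs = tokenizer("The government provides free basic services.", return_tensors="pt")

# Force the decoder to start with the Siswati language token
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("ss"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```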
jlbaker361/dcgan-wikiart25
jlbaker361
2024-01-26T16:01:54Z
0
0
null
[ "region:us" ]
null
2024-01-25T03:57:26Z
--- {} --- Creative Adversarial Network - epochs: 2 - dataset: jlbaker361/wikiart-balanced25 - n classes: 27 - batch_size: 4 - images were resized to 512 and then center-cropped to 512 - used clip=False discriminator parameters: - init_dim: 32 - final_dim: 512 generator parameters: - input noise_dim: 100
Heithem777/NeuralPipe-7B-slerp
Heithem777
2024-01-26T15:56:03Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "samir-fama/SamirGPT-v1", "abacusai/Slerp-CM-mist-dpo", "base_model:abacusai/Slerp-CM-mist-dpo", "base_model:merge:abacusai/Slerp-CM-mist-dpo", "base_model:samir-fama/SamirGPT-v1", "base_model:merge:samir-fama/SamirGPT-v1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T12:50:43Z
--- tags: - merge - mergekit - lazymergekit - samir-fama/SamirGPT-v1 - abacusai/Slerp-CM-mist-dpo base_model: - samir-fama/SamirGPT-v1 - abacusai/Slerp-CM-mist-dpo --- # NeuralPipe-7B-slerp NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1) * [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: samir-fama/SamirGPT-v1 parameters: density: 0.53 weight: 0.4 - model: abacusai/Slerp-CM-mist-dpo parameters: density: 0.53 weight: 0.3 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Heithem777/NeuralPipe-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
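For context on the `dare_ties` method named in the configuration above, here is a hedged, simplified sketch of the DARE step (randomly drop part of each fine-tuned delta and rescale the remainder by 1/density). mergekit's actual implementation also performs TIES-style sign election across models, which this sketch omits; the tensors below are random stand-ins.

```python
import torch

def dare(delta: torch.Tensor, density: float = 0.53) -> torch.Tensor:
    """Drop-And-REscale: keep roughly `density` of a task vector, rescale what survives."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# Merged weight for one tensor, following the densities/weights in the YAML above.
# base_w, w_samir, w_slerp_cm stand in for corresponding tensors from the three models.
base_w = torch.randn(16, 16)
w_samir = base_w + 0.01 * torch.randn(16, 16)
w_slerp_cm = base_w + 0.01 * torch.randn(16, 16)
merged = base_w + 0.4 * dare(w_samir - base_w) + 0.3 * dare(w_slerp_cm - base_w)
```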
MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-26T15:53:44Z
39
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "SciPhi/SciPhi-Mistral-7B-32k", "pytorch", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us", "license:apache-2.0", "base_model:MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-26T15:45:09Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - SciPhi/SciPhi-Mistral-7B-32k - pytorch - license:mit - autotrain_compatible - endpoints_compatible - region:us - license:apache-2.0 model_name: SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",  # Download the model file first
    n_ctx=32768,     # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,     # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
    "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant",  # Prompt
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True        # Whether to echo the prompt
)

# Chat Completion API

# Set chat_format according to the model you are using; this model's prompt template above is ChatML
llm = Llama(model_path="./SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="chatml")
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
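As a quick illustration, a minimal LangChain sketch for this model (assuming `langchain` and `llama-cpp-python` are installed; the import path may differ across LangChain versions):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # match the model's 32k context window
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm("Summarise what the GGUF format is in one paragraph."))
```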
NAQarabash/flan-t5-base-finetunes_QA_tr
NAQarabash
2024-01-26T15:53:14Z
90
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-26T15:52:38Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: flan-t5-base-finetunes_QA_tr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-finetunes_QA_tr This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
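No usage snippet is provided; below is a minimal, hypothetical inference sketch with the standard `transformers` pipeline. The model name suggests Turkish QA fine-tuning, so the prompt language and the `question:`/`context:` format are assumptions; adjust them to match the fine-tuning data.

```python
from transformers import pipeline

qa = pipeline("text2text-generation", model="NAQarabash/flan-t5-base-finetunes_QA_tr")
result = qa("question: What is the capital of Turkey? context: Ankara is the capital of Turkey.")
print(result[0]["generated_text"])
```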
mlx-community/SOLAR-10.7b-Instruct-dpo-4bit-mlx
mlx-community
2024-01-26T15:51:34Z
2
0
transformers
[ "transformers", "llama", "text-generation", "mlx", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T14:37:31Z
---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mlx
---

# mlx-community/SOLAR-10.7b-Instruct-dpo-4bit-mlx

This model was converted to MLX format from [`macadeliccc/SOLAR-10.7b-Instruct-dpo`](https://huggingface.co/macadeliccc/SOLAR-10.7b-Instruct-dpo).
Refer to the [original model card](https://huggingface.co/macadeliccc/SOLAR-10.7b-Instruct-dpo) for more details on the model.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/SOLAR-10.7b-Instruct-dpo-4bit-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
alnrg2arg/blockchainlabs_7B_merged_test2_4_sft_4bit_DPO_orca
alnrg2arg
2024-01-26T15:48:17Z
61
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "dataset:Intel/orca_dpo_pairs", "base_model:alnrg2arg/blockchainlabs_7B_merged_test2_4", "base_model:quantized:alnrg2arg/blockchainlabs_7B_merged_test2_4", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-01-24T06:47:22Z
---
language:
- en
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: alnrg2arg/blockchainlabs_7B_merged_test2_4
datasets:
- Intel/orca_dpo_pairs
---

This model is based on alnrg2arg/blockchainlabs_7B_merged_test2_4, the merged model from blockchainlabs test 2.4.

The project aims to build a small LLM for on-device use.

The overall pipeline for this iteration is:

1. Merge models to make a base model (7B)
2. Prune the model to reduce the parameter count (50% sparsity)
3. Use DPO for the recovery phase after pruning

This model, which is not pruned, is intended as a baseline for comparison with the pruned model.

This is the code and the parameters I chose for this model (DPO):

```python
import torch
from transformers import TrainingArguments, AutoModelForCausalLM
from trl import DPOTrainer

# `model`, `tokenizer` and `dataset` are assumed to be defined earlier in the notebook
dpo_trainer = DPOTrainer(
    model = model,
    ref_model = None,
    args = TrainingArguments(
        per_device_train_batch_size = 8,
        gradient_accumulation_steps = 8,
        warmup_ratio = 0.1,
        num_train_epochs = 3,
        learning_rate = 5e-6,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.0,
        lr_scheduler_type = "linear",
        seed = 42,
        output_dir = "output_DPO",
    ),
    beta = 0.1,
    train_dataset = dataset,
    # eval_dataset = raw_datasets["test"],
    tokenizer = tokenizer,
    max_length = 1024,
    max_prompt_length = 512,
)
```

The code and parameters are borrowed from https://colab.research.google.com/drive/1SKrKGV-BZoU4kv5q3g0jtE_OhRgPtrrQ?usp=sharing

Benchmark scores:

| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|----------|------:|------|-----:|------|-----:|---|-----:|
|winogrande| 1|none | 5|acc |0.8248|± |0.0107|
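The table above follows the output format of EleutherAI's lm-evaluation-harness; a hedged sketch of the command that would reproduce the winogrande number (harness version and exact flags are assumptions):

```shell
pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=alnrg2arg/blockchainlabs_7B_merged_test2_4_sft_4bit_DPO_orca \
    --tasks winogrande \
    --num_fewshot 5
```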
gromdimon/beLLM
gromdimon
2024-01-26T15:41:05Z
0
0
null
[ "art", "bigram-language-model", "text-generation", "en", "be", "license:mit", "region:us" ]
text-generation
2024-01-26T13:33:52Z
---
license: mit
language:
- en
- be
inference: false
tags:
- art
- bigram-language-model
- text-generation
---

# beLLM

## Model Description

beLLM, the `Belarusian Large Language Model (LLM)`, is a pretrained generative language model for the Belarusian language. It is based on the previous work of [RuPoemGPT](https://github.com/gromdimon/ml-random/tree/master/rupoemgpt). The model was trained on a collection of Belarusian poems and prose, which were collected from different sources. For more information about beLLM, please refer to the [github-repo](https://github.com/gromdimon/beLLM).

### Intended Use

This model is intended for natural language generation tasks, such as creative writing assistance or text completion.

### Limitations and Bias

The model was trained on only about 10 MB of data, so it is strongly biased and very limited.

## Training and Evaluation Data

The dataset was collected from different sources and manually preprocessed. It contains over 9.5 million characters and is available on the [github-repo](https://github.com/gromdimon/beLLM). The dataset includes the following sources:

- [Belaruskaja Palichka](https://knihi.com/)
- [Ejka](https://ejka.ru/)
- [LitBel](https://lit-bel.org/)
- [RuLit](https://www.rulit.me/)
- [Stihi.by](https://stihi.by/)
- [BelSputnik](https://bel.sputnik.by/)

Some of the authors included in the dataset:

- Maxim Tank (Максім Танк)
- Yanka Kupala (Янка Купала)
- Yakub Kolas (Якуб Колас)
- Maxim Bogdanovich (Максім Багдановіч)
- Vasyl Bykov (Васіль Быкаў)
- Francishak Bagushevich (Францішак Багушэвіч)
- Yanka Bryl (Янка Брыль)

### Training Procedure

Hyperparameters for the training included:

```
# Hyperparameters
BATCH_SIZE = 32  # how many independent sequences will we process in parallel?
BLOCK_SIZE = 256  # what is the maximum context length for predictions?
MAX_ITERATIONS = 10000
EVALUATION_INTERVAL = 500
LEARNING_RATE = 4e-4
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
EVALUATION_ITERATIONS = 200
NUMBER_OF_EMBEDDINGS = 512
NUMBER_OF_HEADS = 8
NUMBER_OF_LAYERS = 8
DROPOUT = 0.0
# -----------
```

The weights were saved every 2,000 epochs; you can find them in this repo. Each checkpoint is named `model_<number_of_epochs>`.

### Evaluation Results

The latest checkpoint, `model_9999.pt`, currently produces generations like the following:

```
Хапаць, дзе к попле можна Займаць зрабіць. Так маўчаў кашлянуць, зноў барадучыся словы, зноў трагічна і шум пачаў упалы, як дрыготкімі вушамі. Габрыня пацалавала Ганна лаючася: – Зноў не знаёмую, за штаб мне кашлянулася, што будзе член такі рэч, на колішняй Нёмане! Як трэба дагледзець кожным? Што з табой: вялікі год кашляніць будуць, колькі Яўхіма! Ну што ж, колькі хітры! І не горш за ўсіх! Хадзіць на вуліцы – нясіць ды, за важней! Заявіць – конь бароўскі, дахаты!.. Пад Куранятком! – Го-га, дзела хадзіць па хатах! – Яўхім свой, жвавы, запярэчыла Яўхіма. – Няма начы! Не трэба ведаць нікому! – неахвотна засмяялася за Зайчыка. – Пакуль не пішаш! На добры малы чалавек! Ніякі нячас, канешне, чакаў маладых панылы дыялектар, у Петрака, вячэрам, у турме які яго раней. «Э-е, аднак! Не, не ведаю, якая чаго гэта яна». — А ты, хлопец, кажа! Хлопчыкі, хлопчыкі! От хлопчыкі! — Гэта ўжо толькі добра ведаюць, што. Найшла сушчэня і на гарышчы месяцяцца ўволю, ці славакі турмаюць? — Пад бокам, — скамандаваў ката, — прадаваў Брык. Апошняя нібы набок ад яго ці здурнела, быццам адчуваючы сябе чаканне нешта сваім, хоць яна гаварыла. Дзёмчыхі неўпрыкмет пагорквалі з вачэй сетку.
Ён магла дастаць з роспаччу астраўкаю трохпрыбы любіў адным ліхам, заслугачу было такое, што ж была пры сабе Лена такая грамада, якімі былі бліжэй да ўсіх магіл часам дабраўся. — А хіба ён жа смуглы? — спытала яна. — Выглядаў бы, каб аб нашым такім ваенным час ісці стаў і маладзіца не чапала. Толькі лапамі ўжо зусім недадзеленым быў незразумелы, але калі на Івана зноў кароценька прасіла. — Вось што, барыс падкінуў? — спытаўся нарэшце, як змоўкла з вераспіскай у кішэню, прыпаўшы: — Выходзіць яна ўжо няма для яе! Годзе за бацьку. Ідзіце, a людзі стараліся бацькамі. Высадзіце, што ўсе роўныя! Яна лёгенька штанула: Джулія дагоніць — барадаты ад шмат штабе чалавекі. Яна ісці маці не дагоніць, а яна не адчулася. Ён баяўся збірацца ў горад. Дзяўчыны яшчэ больш не былі, каб у печы, вядома, ніколі, яна не гаварыла. Ніколі
```

## Usage

For usage and other information, please refer to the [github-repo](https://github.com/gromdimon/beLLM).

## Source and Contributions

This model was developed by [Dzmitry Hramyka](https://github.com/gromdimon). Contributions and feedback are welcome.
wucng/res18_flower_c5
wucng
2024-01-26T15:36:25Z
309
0
transformers
[ "transformers", "tensorboard", "safetensors", "resnet", "image-classification", "generated_from_trainer", "custom_code", "base_model:wucng/custom-resnet18", "base_model:finetune:wucng/custom-resnet18", "autotrain_compatible", "region:us" ]
image-classification
2024-01-26T14:56:31Z
--- base_model: wucng/custom-resnet18 tags: - generated_from_trainer metrics: - accuracy model-index: - name: res18_flower_c5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # res18_flower_c5 This model is a fine-tuned version of [wucng/custom-resnet18](https://huggingface.co/wucng/custom-resnet18) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2866 - Accuracy: 0.8954 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4384 | 1.0 | 46 | 0.2866 | 0.8954 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
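The card provides no inference example; below is a hedged sketch with the `transformers` pipeline. `trust_remote_code=True` is assumed to be required because the base model ships custom ResNet code, and `flower.jpg` is a placeholder path.

```python
from transformers import pipeline

# Custom architecture, so remote code must be trusted (assumption based on the custom_code tag)
clf = pipeline("image-classification", model="wucng/res18_flower_c5", trust_remote_code=True)
print(clf("flower.jpg"))  # hypothetical local image path
```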
anilbhatt1/phi2-proj-peft-model
anilbhatt1
2024-01-26T15:19:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-26T02:27:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RJZauner/mistral-7b-esrs-e1-dolly
RJZauner
2024-01-26T15:06:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-26T15:05:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jeevana/group8qna_gpt2__26janV001
jeevana
2024-01-26T15:06:25Z
159
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T14:56:41Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: group8qna_gpt2__26janV001 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # group8qna_gpt2__26janV001 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8110 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9404 | 0.47 | 100 | 2.9545 | | 2.8649 | 0.93 | 200 | 2.8110 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
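No usage example is given; a minimal, hypothetical sketch with the `transformers` pipeline (the Q&A prompt format is an assumption; match whatever format the training data used):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="jeevana/group8qna_gpt2__26janV001")
out = generator("Question: What is a blockchain? Answer:", max_new_tokens=64)
print(out[0]["generated_text"])
```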
jeevana/G8_mistral7b_qlora_1211
jeevana
2024-01-26T14:59:50Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-01-26T12:39:00Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 model-index: - name: G8_mistral7b_qlora_1211 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G8_mistral7b_qlora_1211 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1692 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1583 | 0.47 | 100 | 1.2112 | | 1.0407 | 0.93 | 200 | 1.1692 | ### Framework versions - PEFT 0.7.1 - Transformers 4.37.1 - Pytorch 2.0.0 - Datasets 2.16.0 - Tokenizers 0.15.0
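The card lists no inference snippet; a minimal sketch of loading this adapter on top of its base model with `peft` (dtype and device placement are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "jeevana/G8_mistral7b_qlora_1211")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```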
magixn/q-FrozenLake-v1-4x4-noSlippery
magixn
2024-01-26T14:58:02Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-26T14:57:58Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="magixn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
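`load_from_hub` is not defined in the snippet above; it comes from the Deep RL course notebooks. A sketch of one possible definition, assuming the checkpoint is a pickled dict (as the `.pkl` filename suggests):

```python
import pickle

import gym  # the course notebooks use gym; swap for gymnasium if preferred
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a model dict (q-table, env_id, ...) from the Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```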
guirnd/Reinforce-CartPole-v1
guirnd
2024-01-26T14:55:57Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-26T14:55:48Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.

To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
y629/distilbert-base-uncased-finetuned-emotion
y629
2024-01-26T14:54:25Z
93
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-26T14:37:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.919 - name: F1 type: f1 value: 0.9192117062934999 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2241 - Accuracy: 0.919 - F1: 0.9192 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8491 | 1.0 | 250 | 0.3189 | 0.905 | 0.9024 | | 0.2564 | 2.0 | 500 | 0.2241 | 0.919 | 0.9192 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1+cu117 - Datasets 2.13.2 - Tokenizers 0.13.3
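For quick inference, the fine-tuned checkpoint can be loaded with the standard `transformers` pipeline (a minimal sketch; the label set comes from the emotion dataset):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="y629/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy today!"))
```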
colable/llama-ko-peft
colable
2024-01-26T14:52:50Z
58
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T14:34:31Z
---
license: mit
language:
- ko
---

# open-llama-2-ko-based model with in-house dataset

This is a Korean model based on:

* [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b)

GPU code example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

## v2 models
model_path = "colable/llama-ko-peft"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map='auto',
    local_files_only=False,
    load_in_4bit=True
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
UnderstandLing/llama-2-7b-chat-hi
UnderstandLing
2024-01-26T14:46:14Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:adapter:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2023-12-30T09:28:33Z
--- library_name: peft base_model: NousResearch/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.2
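For reference, a sketch of how the quantization config listed above would be expressed in code when loading the base model and this adapter (the `device_map` choice is an assumption):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed in the training procedure above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "UnderstandLing/llama-2-7b-chat-hi")
```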