Dataset schema (one row per model):

| Column | Type | Range / Cardinality |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-23 12:32:37 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 571 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-23 12:31:07 |
| card | string | length 11 to 1.01M |
ycros/BagelMIsteryTour-v2-8x7B
ycros
2024-01-27T11:40:20Z
53
16
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora", "base_model:merge:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora", "base_model:Sao10K/Sensualize-Mixtral-bf16", "base_model:merge:Sao10K/Sensualize-Mixtral-bf16", "base_model:jondurbin/bagel-dpo-8x7b-v0.2", "base_model:merge:jondurbin/bagel-dpo-8x7b-v0.2", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:merge:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:merge:mistralai/Mixtral-8x7B-v0.1", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-19T04:48:19Z
---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2
- Sao10K/Sensualize-Mixtral-bf16
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---

# BagelMIsteryTour-v2-8x7B

[GGUF versions here](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B-GGUF)
[AWQ versions here](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B-AWQ)

Bagel, Mixtral Instruct, with extra spices. Give it a taste. Works with Alpaca prompt formats, though the Mistral format should also work.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63044fa07373aacccd8a7c53/lxNMzXo_dq_JCP9YyUyaw.jpeg)

I started experimenting to see whether I could improve or fix some of Bagel's problems. I was inspired by how well Doctor-Shotgun's Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss worked (a LimaRP tune on top of base Mixtral, then merged with Mixtral Instruct), so I tried some merges of Bagel with Mixtral Instruct. Somehow I ended up here: Bagel, Mixtral Instruct, a little bit of LimaRP, and a little bit of Sao10K's Sensualize.

So far in my testing it's working very well. While it seems fairly unaligned on a lot of stuff, it's maybe a little too aligned on a few specific things (which I think comes from Sensualize) - something to play with in the future, or maybe to DPO out.

I've been running (temp last) minP 0.1, dynatemp 0.5-4, rep pen 1.07, rep range 1024. I've been testing Alpaca-style Instruction/Response and Instruction/Input/Response prompts, and both work well; I expect Mistral's prompt format would also work. You may need to add a stopping string on "{{char}}:" for RPs, because the model can sometimes duplicate those in responses and waffle on.

It seems to hold up at long contexts without falling apart, as Bagel and some other Mixtral tunes tend to, and it definitely doesn't seem prone to loopiness. It can be pushed into extravagant prose if the scene/setting calls for it.

__Version 2:__ lowered the mix of Sensualize.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as the base.
### Models Merged

The following models were included in the merge:
* [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2)
* [Sao10K/Sensualize-Mixtral-bf16](https://huggingface.co/Sao10K/Sensualize-Mixtral-bf16)
* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
* [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mistralai/Mixtral-8x7B-v0.1
models:
  - model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
    parameters:
      density: 0.5
      weight: 0.2
  - model: Sao10K/Sensualize-Mixtral-bf16
    parameters:
      density: 0.5
      weight: 0.1
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: 0.6
      weight: 1.0
  - model: jondurbin/bagel-dpo-8x7b-v0.2
    parameters:
      density: 0.6
      weight: 0.5
merge_method: dare_ties
dtype: bfloat16
```
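Not part of the original card: a minimal loading sketch for the merged model, assuming a standard transformers install and enough memory for an 8x7B; the sampling settings and prompt are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: load the merged MoE model; bfloat16 matches the merge dtype above.
model_id = "ycros/BagelMIsteryTour-v2-8x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # assumption: accelerate installed, multi-GPU/CPU offload as needed
)

# Alpaca-style prompt, as recommended in the card above.
prompt = "### Instruction:\nWrite a limerick about bagels.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```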
achimvp/ppo-PyramidsRND
achimvp
2024-01-27T11:39:54Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-01-27T11:39:51Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: achimvp/ppo-PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
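Not in the original card: if you'd rather fetch the trained policy files programmatically, a hedged sketch with `huggingface_hub` (printing the download path is just for illustration):

```python
from huggingface_hub import snapshot_download

# Hedged sketch: pull the trained agent files from the Hub, then point
# the ML-Agents tooling at the downloaded directory.
local_dir = snapshot_download(repo_id="achimvp/ppo-PyramidsRND")
print(local_dir)  # contains the exported .onnx / .nn policy files
```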
CultriX/OmniTrixAI
CultriX
2024-01-27T11:39:47Z
51
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mlabonne/NeuralBeagle14-7B", "FelixChao/WestSeverus-7B-DPO-v2", "CultriX/MergeTrix-7B-v2", "base_model:PetroGPT/WestSeverus-7B-DPO-v2", "base_model:merge:PetroGPT/WestSeverus-7B-DPO-v2", "base_model:mlabonne/NeuralBeagle14-7B", "base_model:merge:mlabonne/NeuralBeagle14-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-25T13:56:18Z
---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- FelixChao/WestSeverus-7B-DPO-v2
- CultriX/MergeTrix-7B-v2
base_model:
- mlabonne/NeuralBeagle14-7B
- FelixChao/WestSeverus-7B-DPO-v2
- CultriX/MergeTrix-7B-v2
license: apache-2.0
---

# EDIT: Always check my space for the latest benchmark results for my models!
* https://huggingface.co/spaces/CultriX/Yet_Another_LLM_Leaderboard

# OmniTrixAI

OmniTrixAI is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [CultriX/MergeTrix-7B-v2](https://huggingface.co/CultriX/MergeTrix-7B-v2)

## 🧩 Configuration

```yaml
models:
  - model: senseable/WestLake-7B-v2
    # No parameters necessary for base model
  - model: mlabonne/NeuralBeagle14-7B
    parameters:
      density: 0.65
      weight: 0.40
  - model: FelixChao/WestSeverus-7B-DPO-v2
    parameters:
      density: 0.45
      weight: 0.26
  - model: CultriX/MergeTrix-7B-v2
    parameters:
      density: 0.55
      weight: 0.34
merge_method: dare_ties
base_model: senseable/WestLake-7B-v2
parameters:
  int8_mask: true
dtype: float16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "CultriX/OmniTrixAI"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
mzwing/bling-phi-2-v0-GGUF
mzwing
2024-01-27T11:25:19Z
31
0
null
[ "gguf", "base_model:llmware/bling-phi-2-v0", "base_model:quantized:llmware/bling-phi-2-v0", "license:apache-2.0", "region:us" ]
null
2024-01-27T09:25:18Z
---
base_model: llmware/bling-phi-2-v0
inference: false
license: apache-2.0
model_creator: llmware
model_name: bling phi 2 v0
model_type: phi
prompt_template: >
  System: A chat between a curious human and an artificial intelligence
  assistant. The assistant gives helpful, detailed, and polite answers to the
  human's questions. Human: {prompt} Assistant:
quantized_by: mzwing
---

# bling phi 2 v0 - GGUF
- Model creator: [llmware](https://huggingface.co/llmware)
- Original model: [bling phi 2 v0](https://huggingface.co/llmware/bling-phi-2-v0)

<!-- description start -->
## Description

This repo contains GGUF format model files for [llmware's bling phi 2 v0](https://huggingface.co/llmware/bling-phi-2-v0).

These files were quantised using hardware kindly provided by [Google Colab](https://colab.research.google.com/) (free CPU machine).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mzwing/AI-related/blob/master/notebooks/bling-phi-2-v0-GGUF.ipynb)

You can also check it out easily in [my GitHub repo](https://github.com/mzwing/AI-related/blob/master/notebooks/bling-phi-2-v0-GGUF.ipynb).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [Nitro](https://nitro.jan.ai/), a fast, lightweight 3MB inference server to supercharge apps with local AI, with an OpenAI-compatible API server.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [2, 3, 4, 5, 6, 8, 16 and 32-bit GGUF models for CPU+GPU inference](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF)
* [llmware's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llmware/bling-phi-2-v0)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: BLING

```
System: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
Human: {prompt}
Assistant:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

They are also compatible with many third-party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [bling-phi-2-v0.Q2_K.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q2_K.gguf) | Q2_K | 2 | 1.09 GB | not yet tested | smallest, significant quality loss - not recommended for most purposes |
| [bling-phi-2-v0.Q3_K_S.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q3_K_S.gguf) | Q3_K_S | 3 | 1.25 GB | not yet tested | very small, high quality loss |
| [bling-phi-2-v0.Q3_K_M.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q3_K_M.gguf) | Q3_K_M | 3 | 1.49 GB | not yet tested | very small, high quality loss |
| [bling-phi-2-v0.Q3_K_L.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q3_K_L.gguf) | Q3_K_L | 3 | 1.25 GB | not yet tested | small, substantial quality loss |
| [bling-phi-2-v0.Q4_0.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q4_0.gguf) | Q4_0 | 4 | 1.6 GB | not yet tested | legacy; small, very high quality loss - prefer using Q3_K_M |
| [bling-phi-2-v0.Q4_K_S.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q4_K_S.gguf) | Q4_K_S | 4 | 1.63 GB | not yet tested | small, greater quality loss |
| [bling-phi-2-v0.Q4_K_M.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q4_K_M.gguf) | Q4_K_M | 4 | 1.79 GB | not yet tested | medium, balanced quality - recommended |
| [bling-phi-2-v0.Q5_0.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q5_0.gguf) | Q5_0 | 5 | 1.93 GB | not yet tested | legacy; medium, balanced quality - prefer using Q4_K_M |
| [bling-phi-2-v0.Q5_K_S.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q5_K_S.gguf) | Q5_K_S | 5 | 1.93 GB | not yet tested | large, low quality loss - recommended |
| [bling-phi-2-v0.Q5_K_M.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q5_K_M.gguf) | Q5_K_M | 5 | 2.07 GB | not yet tested | large, very low quality loss - recommended |
| [bling-phi-2-v0.Q6_K.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q6_K.gguf) | Q6_K | 6 | 2.29 GB | not yet tested | very large, extremely low quality loss |
| [bling-phi-2-v0.Q8_0.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.Q8_0.gguf) | Q8_0 | 8 | 2.96 GB | not yet tested | very large, extremely low quality loss - not recommended |
| [bling-phi-2-v0.F16.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.F16.gguf) | F16 | 16 | 5.56 GB | not yet tested | extremely large, extremely low quality loss - not recommended |
| [bling-phi-2-v0.F32.gguf](https://huggingface.co/mzwing/bling-phi-2-v0-GGUF/blob/main/bling-phi-2-v0.F32.gguf) | F32 | 32 | 11.1 GB | not yet tested | extremely large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: `mzwing/bling-phi-2-v0-GGUF`, and below it, a specific filename to download, such as: `bling-phi-2-v0.Q4_K_M.gguf`. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download mzwing/bling-phi-2-v0-GGUF bling-phi-2-v0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download mzwing/bling-phi-2-v0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download mzwing/bling-phi-2-v0-GGUF bling-phi-2-v0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.

</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m bling-phi-2-v0.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "System: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\nHuman: {prompt}\nAssistant:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 2048` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("mzwing/bling-phi-2-v0-GGUF", model_file="bling-phi-2-v0.Q4_K_M.gguf", model_type="phi", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Thanks, and how to contribute

Thanks to [Google Colab](https://colab.research.google.com/)! All the quantised models in this repo were produced on that awesome platform. Thanks a lot!

Thanks to [llama.cpp](https://github.com/ggerganov/llama.cpp)! It inspired me to explore this fascinating AI field.

Thanks to [TheBloke](https://huggingface.co/TheBloke)! Everything in this repo is modelled on his releases.

You are welcome to create a **Pull Request**, especially for the **RAM usage** figures!

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: llmware's bling phi 2 v0

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

bling-phi-2-v0 is part of the BLING ("Best Little Instruct No GPU Required ...") model series, RAG-instruct trained on top of a Microsoft Phi-2B base model.

BLING models are fine-tuned with high-quality custom instruct datasets, designed for rapid prototyping in RAG scenarios.

For models with comparable size and performance in RAG deployments, please see:

[**bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0)
[**bling-sheared-llama-2.7b-0.1**](https://huggingface.co/llmware/bling-sheared-llama-2.7b-0.1)
[**bling-red-pajamas-3b-0.1**](https://huggingface.co/llmware/bling-red-pajamas-3b-0.1)

### Benchmark Tests

Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester). Average of 2 test runs with 1 point for a correct answer, 0.5 points for partially correct or blank / NF, 0.0 points for incorrect, and -1 point for hallucinations.

--**Accuracy Score**: **93.0** correct out of 100
--Not Found Classification: 95.0%
--Boolean: 85.0%
--Math/Logic: 82.5%
--Complex Questions (1-5): 3 (Above Average - multiple-choice, causal)
--Summarization Quality (1-5): 3 (Above Average)
--Hallucinations: No hallucinations observed in test runs.

For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo.

### Model Description

<!-- Provide a longer summary of what this model is.
-->

- **Developed by:** llmware
- **Model type:** Phi-2B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Microsoft Phi-2B-Base

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

The intended use of BLING models is two-fold:

1. Provide high-quality RAG-Instruct models designed for fact-based, no "hallucination" question-answering in connection with an enterprise RAG workflow.
2. BLING models are fine-tuned on top of leading base foundation models, generally in the 1-3B+ range, and purposefully rolled out across multiple base models to provide choices and "drop-in" replacements for RAG-specific use cases.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services and legal and regulatory industries with complex information sources.

BLING models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

## How to Get Started with the Model

The fastest way to get started with BLING is through direct import in transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bling-phi-2-v0", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("bling-phi-2-v0", trust_remote_code=True)
```

Please refer to the generation_test .py files in the Files repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.

The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:

```
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```

The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

1. Text Passage Context, and
2. Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

```
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```

If you are using a HuggingFace generation script:

```python
# prepare prompt packaging used in fine-tuning process
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

# temperature: set at 0.3 for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries
outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
```

## Model Card Contact

Darren Oberst & llmware team

<!-- original-model-card end -->
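The README above points to llama-cpp-python but only shows ctransformers code; here is a hedged llama-cpp-python sketch to complement it (the quant file choice, `n_gpu_layers`, and the example passage are assumptions, not from the original card):

```python
from llama_cpp import Llama

# Hedged sketch: load a quant from this repo and apply the BLING prompt template.
llm = Llama(
    model_path="bling-phi-2-v0.Q4_K_M.gguf",  # downloaded as shown above
    n_ctx=2048,
    n_gpu_layers=32,  # assumption: set to 0 for CPU-only
)

context = "The invoice total is $12,450, due on March 1."  # hypothetical passage
question = "What is the invoice total?"
prompt = (
    "System: A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions.\n"
    f"Human: {context}\n{question}\nAssistant:"
)
out = llm(prompt, max_tokens=100, temperature=0.3)
print(out["choices"][0]["text"])
```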
alexboot/FallenShadowV2
alexboot
2024-01-27T10:59:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2023-12-22T12:41:48Z
---
license: apache-2.0
---

The model has been trained on 5 hours of cleaned data. It works pretty well for TTS, as long as your TTS voice is high. This is my first model, so I hope it's good. I think it is.
LoneStriker/Fimbulvetr-10.7B-v1-5.0bpw-h6-exl2
LoneStriker
2024-01-27T10:52:55Z
6
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-15T15:42:34Z
---
license: cc-by-nc-4.0
language:
- en
---

My current low-budget daily driver, so far. Frostwindv2 + Sensualize v1.1 + ___ data on uncen Instruct Solar.

This is meant to be a verbose, smart roleplaying model. I think I captured those two parts this time. Well, for my own cards and scenarios anyway, it passed with flying colours.

I recommend using min-p; I liked the Universal-Light preset in SillyTavern. Experimental.

***

### Prompt Format: Alpaca

```
### Instruction:
<Prompt>

### Input:
<Insert Context Here>

### Response:
```
MoulikBansal/fine-tuned-on-mcq-phi_1_5_new_version_1
MoulikBansal
2024-01-27T10:52:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-25T17:16:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
finalyear2023/rohit_Sharma
finalyear2023
2024-01-27T10:46:22Z
2
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-27T10:46:14Z
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK rohit
license: openrail++
---

# SDXL LoRA DreamBooth - finalyear2023/rohit_Sharma

<Gallery />

## Model description

These are finalyear2023/rohit_Sharma LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a photo of TOK rohit` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format. [Download](https://huggingface.co/finalyear2023/rohit_Sharma/tree/main) them in the Files & versions tab.
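The card ships no inference code; a hedged diffusers sketch for loading these LoRA weights (fp16 on CUDA and the step count are assumptions, not from the card):

```python
import torch
from diffusers import DiffusionPipeline

# Hedged sketch: load SDXL base, attach the LoRA, and use the trigger phrase.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # assumption: fp16 on a CUDA GPU
).to("cuda")
pipe.load_lora_weights("finalyear2023/rohit_Sharma")

image = pipe("a photo of TOK rohit", num_inference_steps=30).images[0]
image.save("rohit.png")
```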
shareit/mistral-7b-instruct-v0.2-qlora-orcaplatypus
shareit
2024-01-27T10:36:46Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-01-27T10:23:57Z
--- library_name: peft base_model: mistralai/Mistral-7B-Instruct-v0.2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: QuantizationMethod.BITS_AND_BYTES - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
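Not in the original card: a hedged sketch that recreates the 4-bit NF4 quantization config listed above and attaches this adapter with PEFT (`device_map` is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Hedged sketch: mirror the bitsandbytes settings from the training procedure above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    quantization_config=bnb_config,
    device_map="auto",  # assumption
)
model = PeftModel.from_pretrained(base, "shareit/mistral-7b-instruct-v0.2-qlora-orcaplatypus")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```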
LoneStriker/Tess-10.7B-v1.5-8.0bpw-h8-exl2
LoneStriker
2024-01-27T10:31:04Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T10:26:33Z
---
license: apache-2.0
---

<br>

![Tesoro](https://huggingface.co/migtissera/Tess-10.7B-v1.5/resolve/main/Tess-v1-5.png)

<br>

Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-10.7B-v1.5 was trained on the SOLAR-10.7B base.

# Prompt Format:

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
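Not in the original card: a tiny helper sketching how the prompt format above might be assembled in code (the system text is a placeholder):

```python
# Hedged helper: assemble the Tess prompt format shown above.
def tess_prompt(system: str, user: str) -> str:
    return f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

print(tess_prompt("You are a helpful assistant.", "Summarize GGUF in one sentence."))
```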
badokorach/bert-finetuned-260124
badokorach
2024-01-27T10:30:12Z
63
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "base_model:badokorach/bert-finetuned-210124", "base_model:finetune:badokorach/bert-finetuned-210124", "endpoints_compatible", "region:us" ]
question-answering
2024-01-26T19:32:40Z
---
base_model: badokorach/bert-finetuned-210124
tags:
- generated_from_keras_callback
model-index:
- name: badokorach/bert-finetuned-260124
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# badokorach/bert-finetuned-260124

This model is a fine-tuned version of [badokorach/bert-finetuned-210124](https://huggingface.co/badokorach/bert-finetuned-210124) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Validation Loss: 0.0
- Epoch: 14

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 4560, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0357     | 0.0             | 0     |
| 0.0006     | 0.0             | 1     |
| 0.0003     | 0.0             | 2     |
| 0.0002     | 0.0             | 3     |
| 0.0001     | 0.0             | 4     |
| 0.0001     | 0.0             | 5     |
| 0.0001     | 0.0             | 6     |
| 0.0001     | 0.0             | 7     |
| 0.0000     | 0.0             | 8     |
| 0.0001     | 0.0             | 9     |
| 0.0000     | 0.0             | 10    |
| 0.0000     | 0.0             | 11    |
| 0.0000     | 0.0             | 12    |
| 0.0000     | 0.0             | 13    |
| 0.0000     | 0.0             | 14    |

### Framework versions

- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
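Not in the original card: a hedged inference sketch with the transformers question-answering pipeline (the question/context pair is invented for illustration; the repo ships TF weights, which the pipeline should pick up automatically if TensorFlow is installed):

```python
from transformers import pipeline

# Hedged sketch: extractive QA with the fine-tuned BERT checkpoint.
qa = pipeline("question-answering", model="badokorach/bert-finetuned-260124")
result = qa(
    question="Which base model was fine-tuned?",
    context="badokorach fine-tuned bert-finetuned-210124 to produce bert-finetuned-260124.",
)
print(result["answer"], result["score"])
```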
AmrutaMuthal/mero_controlnet
AmrutaMuthal
2024-01-27T10:29:45Z
4
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-01-21T16:36:09Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---

# controlnet-AmrutaMuthal/mero_controlnet

These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
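Not in the original card: a hedged diffusers sketch for loading these weights; since the card does not specify the conditioning type, the generation call is left commented with a placeholder conditioning image:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Hedged sketch: attach these ControlNet weights to the SD 1.5 base they were trained on.
controlnet = ControlNetModel.from_pretrained(
    "AmrutaMuthal/mero_controlnet", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,  # assumption: fp16 on a CUDA GPU
).to("cuda")

# `cond_image` is a placeholder PIL image in whatever conditioning format the model expects:
# image = pipe("a prompt", image=cond_image, num_inference_steps=30).images[0]
```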
mangoo111/repo_name
mangoo111
2024-01-27T10:27:08Z
100
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ko", "dataset:mangoo111/stt_datasets", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-27T06:06:03Z
---
language:
- ko
license: apache-2.0
base_model: openai/whisper-base
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mangoo111/stt_datasets
model-index:
- name: AIHub_non-face-to-face-care_data_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# AIHub_non-face-to-face-care_data_model

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the AIHub_non-face-to-face-care_data dataset. It achieves the following results on the evaluation set:
- Loss: 0.5251
- Cer: 91.7452
- Normalized Cer: 0.1147

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Cer      | Normalized Cer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------:|
| 0.6825        | 2.5   | 1000 | 0.7206          | 125.5376 | 0.1569         |
| 0.4637        | 5.0   | 2000 | 0.4858          | 96.4728  | 0.1206         |
| 0.34          | 7.5   | 3000 | 0.4926          | 92.8792  | 0.1161         |
| 0.2378        | 10.0  | 4000 | 0.5251          | 91.7452  | 0.1147         |

### Framework versions

- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
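Not in the original card: a hedged transcription sketch with the transformers ASR pipeline (the audio filename is hypothetical):

```python
from transformers import pipeline

# Hedged sketch: run Korean speech recognition with the fine-tuned Whisper model.
asr = pipeline("automatic-speech-recognition", model="mangoo111/repo_name")
print(asr("sample_korean_audio.wav"))  # hypothetical local audio file
```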
LoneStriker/Fimbulvetr-10.7B-v1-GGUF
LoneStriker
2024-01-27T10:26:44Z
475
6
null
[ "gguf", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-01-27T09:36:35Z
---
license: cc-by-nc-4.0
language:
- en
---

My current low-budget daily driver, so far. Frostwindv2 + Sensualize v1.1 + ___ data on uncen Instruct Solar.

This is meant to be a verbose, smart roleplaying model. I think I captured those two parts this time. Well, for my own cards and scenarios anyway, it passed with flying colours.

I recommend using min-p; I liked the Universal-Light preset in SillyTavern. Experimental.

***

### Prompt Format: Alpaca

```
### Instruction:
<Prompt>

### Input:
<Insert Context Here>

### Response:
```
LoneStriker/Tess-10.7B-v1.5-6.0bpw-h6-exl2
LoneStriker
2024-01-27T10:26:23Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T10:19:29Z
---
license: apache-2.0
---

<br>

![Tesoro](https://huggingface.co/migtissera/Tess-10.7B-v1.5/resolve/main/Tess-v1-5.png)

<br>

Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-10.7B-v1.5 was trained on the SOLAR-10.7B base.

# Prompt Format:

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
Jayeshkumarjangir/falcon-2-7b-jayesh_5000-merged
Jayeshkumarjangir
2024-01-27T10:22:13Z
13
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T10:09:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/Tess-10.7B-v1.5-3.0bpw-h6-exl2
LoneStriker
2024-01-27T10:09:08Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T09:46:00Z
---
license: apache-2.0
---

<br>

![Tesoro](https://huggingface.co/migtissera/Tess-10.7B-v1.5/resolve/main/Tess-v1-5.png)

<br>

Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-10.7B-v1.5 was trained on the SOLAR-10.7B base.

# Prompt Format:

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
vierlinglukas/rl_course_vizdoom_health_gathering_supreme
vierlinglukas
2024-01-27T10:07:23Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T10:07:15Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 9.86 +/- 4.25
      name: mean_reward
      verified: false
---

A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:

```
python -m sample_factory.huggingface.load_from_hub -r vierlinglukas/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:

```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:

```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
devhyun88/hyun-mistral-7b-orca-platypus-refine
devhyun88
2024-01-27T10:01:33Z
56
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "ko", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T00:53:06Z
--- license: cc-by-sa-4.0 language: - ko --- We fine-tuned this model from mistral-7b-v0.1. # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("devhyun88/hyun-mistral-7b-orca-platypus-refine") model = AutoModelForCausalLM.from_pretrained("devhyun88/hyun-mistral-7b-orca-platypus-refine")
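A short, illustrative follow-up (not from the original card) showing a generation call with the objects loaded above; the sample prompt and sampling settings are assumptions:

```python
# Continuing from the snippet above: a simple sampled generation.
# The prompt is Korean ("Hello, please introduce yourself."), matching the card's language tag.
inputs = tokenizer("안녕하세요, 자기소개를 해주세요.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```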
Shaleen123/medical-yi
Shaleen123
2024-01-27T09:57:24Z
72
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-01-27T09:55:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
farzintava/rl_course_vizdoom_health_gathering_supreme
farzintava
2024-01-27T09:38:24Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T09:38:18Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 8.75 +/- 3.41 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r farzintava/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
vierlinglukas/ppo_stickthing
vierlinglukas
2024-01-27T09:25:50Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T09:25:42Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -216.04 +/- 165.88 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'vierlinglukas/ppo_stickthing' 'batch_size': 512 'minibatch_size': 128} ```
yoshinori-sano/NeuralHermes-2.5-Mistral-7B
yoshinori-sano
2024-01-27T09:14:40Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T09:08:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
whywexist111/whywexist111-buterfiles-32
whywexist111
2024-01-27T09:05:22Z
44
1
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-01-27T08:33:28Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('whywexist111/whywexist111-buterfiles-32') image = pipeline().images[0] image ```
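A small, illustrative follow-up (not from the original card): sampling a batch and saving an image, assuming the pipeline loaded above.

```python
# Continuing from the snippet above: generate four butterflies and save the first.
images = pipeline(batch_size=4).images
images[0].save("butterfly.png")
```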
FounderOfHuggingface/gpt2-large_lora_r4_e2e_nlg_t42000_e5
FounderOfHuggingface
2024-01-27T09:02:42Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2-large", "base_model:adapter:openai-community/gpt2-large", "region:us" ]
null
2024-01-27T09:02:38Z
--- library_name: peft base_model: gpt2-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
LoneStriker/Etheria-55b-v0.1-6.0bpw-h6-exl2
LoneStriker
2024-01-27T08:57:20Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T08:39:33Z
--- base_model: [] tags: - mergekit - merge --- # Steelskull/Etheria-55b-v0.1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/RAhrbktyyVQxOR1np-9L2.png) ## Merge Details An attempt to make a functional Goliath-style merge to create an [Etheria] 55b-200k from two yi-34b-200k models. Due to the merge it 'theoretically' should have a context of 200k, but I recommend starting at 32k and moving up, as it is unknown (at this time) what the merge has done to the context length. This is a merge of both VerA and VerB of Etheria-55b (their numbers were surprisingly good). I then created a sacrificial 55B out of the most performant yi-34b-200k model and performed a dare_ties merge to equalize the model into its current state. ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using Merged-Etheria-55b as a base. ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: Merged-Etheria-55b models: - model: Sacr-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 - model: Merged-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 merge_method: dare_ties tokenizer_source: union parameters: int8_mask: true dtype: bfloat16 ```
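For context (a hedged sketch, not from the original card): a dare_ties configuration like the one above can be executed with mergekit. The snippet below assumes mergekit's Python API (`MergeConfiguration`, `run_merge`) as published at the time of writing, that the YAML above is saved as `etheria-55b.yml`, and that the local model directories it references exist; all paths are hypothetical.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Hypothetical paths: the YAML from this card saved locally, plus an output directory.
with open("etheria-55b.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Etheria-55b-v0.1",
    options=MergeOptions(
        cuda=False,           # set True if a GPU is available
        copy_tokenizer=True,  # matches tokenizer_source: union behavior for outputs
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```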
mlx-community/Mixtral-8x7B-v0.1-hf-4bit-mlx
mlx-community
2024-01-27T08:52:21Z
17
5
mlx
[ "mlx", "mixtral", "moe", "fr", "it", "de", "es", "en", "license:apache-2.0", "region:us" ]
null
2024-01-18T08:27:28Z
--- language: - fr - it - de - es - en license: apache-2.0 tags: - moe - mlx --- # mlx-community/Mixtral-8x7B-v0.1-hf-4bit-mlx This model was converted to MLX format from [`mistralai/Mixtral-8x7B-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). Refer to the [original model card](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Mixtral-8x7B-v0.1-hf-4bit-mlx") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T08:50:06Z
40
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "Kiddyz/testllm-c2", "pytorch", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T08:41:25Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - Kiddyz/testllm-c2 - pytorch - en - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: testllm-c2-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: testllm-c2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1-GGUF testllm-c2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/testllm-c2-Mistral-7B-Instruct-v0.1-GGUF testllm-c2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m testllm-c2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./testllm-c2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./testllm-c2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
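As a brief, hedged illustration of the first linked guide (not part of the original card): the import path follows the LangChain community package at the time of writing, and the file path assumes the Q4_K_M download above.

```python
from langchain_community.llms import LlamaCpp

# Minimal sketch: point LlamaCpp at the GGUF file downloaded earlier.
llm = LlamaCpp(
    model_path="./testllm-c2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # match the model's maximum sequence length
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)
print(llm.invoke("Write a one-sentence story about llamas."))
```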
MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T08:34:22Z
40
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "OpenBuddy/openbuddy-mistral-7b-v13.1", "pytorch", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "license:apache-2.0", "autotrain_compatible", "region:us", "endpoints_compatible", "base_model:MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T08:25:19Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - OpenBuddy/openbuddy-mistral-7b-v13.1 - pytorch - zh - en - fr - de - ja - ko - it - ru - license:apache-2.0 - autotrain_compatible - region:us - endpoints_compatible model_name: openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./openbuddy-mistral-7b-v13.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
tada20001/prompt-tuning-patent-noise-classification
tada20001
2024-01-27T08:34:11Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "region:us" ]
null
2024-01-27T08:33:54Z
--- library_name: peft base_model: bigscience/bloomz-560m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
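The quick-start section of this card is empty; as a hedged sketch only, assuming this repo is a standard PEFT prompt-tuning adapter for the base model declared in the metadata (`bigscience/bloomz-560m`), with a hypothetical classification prompt:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigscience/bloomz-560m"  # base model from the card metadata
adapter_id = "tada20001/prompt-tuning-patent-noise-classification"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the prompt-tuning adapter

# Hypothetical prompt; the adapter's actual expected input format is not documented.
inputs = tokenizer("Patent abstract: ... Is this noise?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```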
satpalsr/l2_full
satpalsr
2024-01-27T08:31:48Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T07:40:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/Etheria-55b-v0.1-4.65bpw-h6-exl2
LoneStriker
2024-01-27T08:24:19Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T08:10:13Z
--- base_model: [] tags: - mergekit - merge --- # Steelskull/Etheria-55b-v0.1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/RAhrbktyyVQxOR1np-9L2.png) ## Merge Details An attempt to make a functional Goliath-style merge: an [Etheria] 55b-200k built from two yi-34b-200k models. Due to the merge it 'theoretically' should have a context of 200k, but I recommend starting at 32k and moving up, as it is unknown (at this time) what the merge has done to the context length. This is a merge of both VerA and VerB of Etheria-55b (their numbers were surprisingly good); I then created a sacrificial 55B out of the most performant yi-34b-200k model, performed a DARE-TIES merge, and equalized the model into its current state. ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using Merged-Etheria-55b as a base. ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: Merged-Etheria-55b models: - model: Sacr-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 - model: Merged-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 merge_method: dare_ties tokenizer_source: union parameters: int8_mask: true dtype: bfloat16 ```
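To reproduce a merge like this, the YAML above is handed to mergekit's `mergekit-yaml` entry point. A minimal sketch driving it from Python (the config filename and output path are illustrative assumptions, and the two Etheria source models must already exist locally):

```python
# Illustrative only: runs mergekit's CLI on the DARE-TIES config shown above.
# "etheria-dare-ties.yml" and "./Etheria-55b-v0.1" are assumed names, not from the card.
import subprocess

subprocess.run(
    ["mergekit-yaml", "etheria-dare-ties.yml", "./Etheria-55b-v0.1", "--cuda"],
    check=True,
)
```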
farzintava/dqn-SpaceInvadersNoFrameskip-v4
farzintava
2024-01-27T08:22:35Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T08:22:00Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 572.50 +/- 116.73 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga farzintava -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga farzintava -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga farzintava ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
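If you prefer loading the checkpoint directly in Python instead of through the RL Zoo CLI, a minimal sketch with `huggingface_sb3` (the zip filename follows the usual RL Zoo naming convention, so treat it as an assumption):

```python
# Minimal sketch: fetch the SB3 checkpoint from the Hub and deserialize it.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="farzintava/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo naming
)
model = DQN.load(checkpoint)
```

Evaluating the agent still requires rebuilding the wrapped Atari environment (AtariWrapper plus 4-frame stacking, per the hyperparameters above), which is exactly what `rl_zoo3.enjoy` automates.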
pszemraj/gpt2-medium-halved
pszemraj
2024-01-27T08:13:59Z
135
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T13:46:27Z
--- library_name: transformers license: mit language: - en inference: parameters: do_sample: True epsilon_cutoff: 0.0001 repetition_penalty: 1.1 no_repeat_ngram_size: 5 --- # gpt2-medium-halved Alright, it's _slightly less_ than half of the original layers from https://hf.co/openai-community/gpt2-medium. Refer to the original model card for all details. ---
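A quick sanity check of the halved model, with sampling settings mirroring the card's `inference.parameters` (the prompt and token cap are arbitrary):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/gpt2-medium-halved")
out = generator(
    "The quick brown fox",
    do_sample=True,
    epsilon_cutoff=0.0001,
    repetition_penalty=1.1,
    no_repeat_ngram_size=5,
    max_new_tokens=64,  # not in the card; an arbitrary cap for the demo
)
print(out[0]["generated_text"])
```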
fatemehsaveh/depression_tweet
fatemehsaveh
2024-01-27T08:11:30Z
166
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:rafalposwiata/deproberta-large-v1", "base_model:finetune:rafalposwiata/deproberta-large-v1", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-22T12:37:49Z
--- base_model: rafalposwiata/deproberta-large-v1 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: depression_tweet results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # depression_tweet This model is a fine-tuned version of [rafalposwiata/deproberta-large-v1](https://huggingface.co/rafalposwiata/deproberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0646 - Accuracy: 0.9836 - Precision: 0.9656 - Recall: 0.9977 - F1: 0.9814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 0.2 | 50 | 0.1556 | 0.9684 | 0.9508 | 0.9777 | 0.9641 | | No log | 0.4 | 100 | 0.1399 | 0.9646 | 0.9354 | 0.9865 | 0.9603 | | No log | 0.61 | 150 | 0.1118 | 0.9631 | 0.9279 | 0.9920 | 0.9589 | | No log | 0.81 | 200 | 0.1090 | 0.9659 | 0.9333 | 0.9922 | 0.9619 | | No log | 1.01 | 250 | 0.0819 | 0.9759 | 0.9556 | 0.9905 | 0.9727 | | No log | 1.21 | 300 | 0.0548 | 0.9831 | 0.9831 | 0.9777 | 0.9804 | | No log | 1.42 | 350 | 0.1162 | 0.9587 | 0.9435 | 0.9624 | 0.9529 | | No log | 1.62 | 400 | 0.1167 | 0.9657 | 0.9303 | 0.9955 | 0.9618 | | No log | 1.82 | 450 | 0.0859 | 0.9776 | 0.9549 | 0.9955 | 0.9747 | | 0.0575 | 2.02 | 500 | 0.0564 | 0.9848 | 0.9707 | 0.9950 | 0.9827 | | 0.0575 | 2.23 | 550 | 0.0591 | 0.9839 | 0.9693 | 0.9945 | 0.9817 | | 0.0575 | 2.43 | 600 | 0.0913 | 0.9814 | 0.9623 | 0.9962 | 0.9790 | | 0.0575 | 2.63 | 650 | 0.0633 | 0.9847 | 0.9686 | 0.9970 | 0.9826 | | 0.0575 | 2.83 | 700 | 0.1171 | 0.9762 | 0.9493 | 0.9985 | 0.9733 | | 0.0575 | 3.04 | 750 | 0.0646 | 0.9836 | 0.9656 | 0.9977 | 0.9814 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.1
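The card omits a usage snippet; a minimal sketch with the 🤗 `pipeline` API (label names come from the checkpoint's config, so inspect the output rather than assuming them):

```python
from transformers import pipeline

# Depression-detection classifier fine-tuned from deproberta-large-v1.
classifier = pipeline("text-classification", model="fatemehsaveh/depression_tweet")
print(classifier("I can't remember the last time I felt okay."))
```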
jeevana/group8qna_gpt2__27janV001
jeevana
2024-01-27T07:59:15Z
193
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T07:53:26Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: group8qna_gpt2__27janV001 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # group8qna_gpt2__27janV001 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9878 | 0.47 | 100 | 2.1829 | | 1.9811 | 0.93 | 200 | 2.0764 | | 1.4933 | 1.4 | 300 | 2.0009 | | 1.3546 | 1.87 | 400 | 1.9729 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
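For a quick smoke test of the fine-tuned checkpoint, a minimal sketch (the QnA prompt format is a guess, since the training data isn't documented):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="jeevana/group8qna_gpt2__27janV001")
result = generator("Question: What is overfitting? Answer:", max_new_tokens=64)
print(result[0]["generated_text"])
```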
GGital/vit-Covid
GGital
2024-01-27T07:44:59Z
178
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-27T07:02:05Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-Covid results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9847036328871893 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-Covid This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0805 - Accuracy: 0.9847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1283 | 0.38 | 100 | 0.1878 | 0.9484 | | 0.0312 | 0.76 | 200 | 0.1484 | 0.9560 | | 0.0655 | 1.15 | 300 | 0.0976 | 0.9713 | | 0.0587 | 1.53 | 400 | 0.0887 | 0.9713 | | 0.0106 | 1.91 | 500 | 0.0980 | 0.9732 | | 0.0137 | 2.29 | 600 | 0.1479 | 0.9618 | | 0.07 | 2.67 | 700 | 0.0882 | 0.9751 | | 0.0068 | 3.05 | 800 | 0.1160 | 0.9675 | | 0.0321 | 3.44 | 900 | 0.0872 | 0.9694 | | 0.0027 | 3.82 | 1000 | 0.0790 | 0.9809 | | 0.0041 | 4.2 | 1100 | 0.1029 | 0.9713 | | 0.0014 | 4.58 | 1200 | 0.0947 | 0.9809 | | 0.0018 | 4.96 | 1300 | 0.1399 | 0.9713 | | 0.001 | 5.34 | 1400 | 0.0689 | 0.9847 | | 0.001 | 5.73 | 1500 | 0.0852 | 0.9790 | | 0.0008 | 6.11 | 1600 | 0.1111 | 0.9790 | | 0.0013 | 6.49 | 1700 | 0.0695 | 0.9866 | | 0.0049 | 6.87 | 1800 | 0.0728 | 0.9885 | | 0.0007 | 7.25 | 1900 | 0.0963 | 0.9790 | | 0.0012 | 7.63 | 2000 | 0.0886 | 0.9847 | | 0.0006 | 8.02 | 2100 | 0.0811 | 0.9847 | | 0.0015 | 8.4 | 2200 | 0.0796 | 0.9847 | | 0.0143 | 8.78 | 2300 | 0.0804 | 0.9847 | | 0.0005 | 9.16 | 2400 | 0.0816 | 0.9847 | | 0.0006 | 9.54 | 2500 | 0.0811 | 0.9847 | | 0.0005 | 9.92 | 2600 | 0.0805 | 0.9847 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
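A minimal inference sketch for the fine-tuned classifier (the image path is a placeholder; class labels are read from the checkpoint's config):

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="GGital/vit-Covid")
image = Image.open("chest_xray.png")  # placeholder path, not from the card
print(classifier(image))
```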
youndukn/zephyr-7b-sft-qlora-8bit-adapter
youndukn
2024-01-27T07:43:31Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:Gryphe/MythoMax-L2-13b", "base_model:adapter:Gryphe/MythoMax-L2-13b", "region:us" ]
null
2024-01-26T04:15:33Z
--- library_name: peft base_model: Gryphe/MythoMax-L2-13b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Weni/WeniGPT-2.2.3-Zephyr-7B-LLM_Base_2.0.3_SFT
Weni
2024-01-27T07:42:55Z
2
0
peft
[ "peft", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
2024-01-26T12:45:21Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: HuggingFaceH4/zephyr-7b-beta model-index: - name: WeniGPT-2.2.3-Zephyr-7B-LLM_Base_2.0.3_SFT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # WeniGPT-2.2.3-Zephyr-7B-LLM_Base_2.0.3_SFT This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.4088 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - training_steps: 337 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5761 | 0.83 | 50 | 0.4671 | | 0.4488 | 1.66 | 100 | 0.4409 | | 0.4272 | 2.49 | 150 | 0.4293 | | 0.4145 | 3.32 | 200 | 0.4219 | | 0.3999 | 4.15 | 250 | 0.4155 | | 0.3872 | 4.98 | 300 | 0.4088 | ### Framework versions - PEFT 0.7.1 - Transformers 4.37.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
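Because this repository holds a PEFT adapter rather than full weights, inference means pulling the zephyr-7b-beta base and attaching the adapter on top; a minimal sketch (dtype and device settings are illustrative):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads HuggingFaceH4/zephyr-7b-beta and applies this SFT adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Weni/WeniGPT-2.2.3-Zephyr-7B-LLM_Base_2.0.3_SFT",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
```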
octnn/Taxi-v3
octnn
2024-01-27T07:38:06Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T07:38:04Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.67 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym # or `import gymnasium as gym` on newer installs model = load_from_hub(repo_id="octnn/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
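`load_from_hub` above is not a library import; it's the small helper used throughout the Hugging Face Deep RL course notebooks. A minimal reconstruction (the snapshot is a pickle, per the `q-learning.pkl` filename, and typically holds the Q-table plus metadata such as `env_id`):

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning snapshot from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```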
jtatman/Dr-Samantha-Philosopher-7B-slerp
jtatman
2024-01-27T07:35:48Z
6
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "mlabonne/NeuralBeagle14-7B", "cognitivecomputations/samantha-1.2-mistral-7b", "custom_code", "base_model:cognitivecomputations/samantha-1.2-mistral-7b", "base_model:merge:cognitivecomputations/samantha-1.2-mistral-7b", "base_model:mlabonne/NeuralBeagle14-7B", "base_model:merge:mlabonne/NeuralBeagle14-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T22:18:28Z
--- tags: - merge - mergekit - lazymergekit - mlabonne/NeuralBeagle14-7B - cognitivecomputations/samantha-1.2-mistral-7b base_model: - mlabonne/NeuralBeagle14-7B - cognitivecomputations/samantha-1.2-mistral-7b --- # Dr-Samantha-Philosopher-7B-slerp Dr-Samantha-Philosopher-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) * [cognitivecomputations/samantha-1.2-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-1.2-mistral-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: mlabonne/NeuralBeagle14-7B layer_range: [0, 32] - model: cognitivecomputations/samantha-1.2-mistral-7b layer_range: [0, 32] merge_method: slerp base_model: cognitivecomputations/samantha-1.2-mistral-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors tokenizer_source: union dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "jtatman/Dr-Samantha-Philosopher-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
lujiashuai/marian-finetuned-kde4-en-to-zh-json
lujiashuai
2024-01-27T07:33:19Z
125
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-zh", "base_model:finetune:Helsinki-NLP/opus-mt-en-zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-01-27T07:14:39Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-zh tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-zh-json results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-zh-json This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0213 - Bleu: 96.7544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
ntc-ai/SDXL-LoRA-slider.cinematic-lighting-with-moody-ambiance
ntc-ai
2024-01-27T07:29:05Z
28
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-27T07:29:02Z
--- language: - en thumbnail: "images/evaluate/cinematic lighting with moody ambiance...poorly lit/cinematic lighting with moody ambiance_17_3.0.png" widget: - text: cinematic lighting with moody ambiance output: url: images/cinematic lighting with moody ambiance_17_3.0.png - text: cinematic lighting with moody ambiance output: url: images/cinematic lighting with moody ambiance_19_3.0.png - text: cinematic lighting with moody ambiance output: url: images/cinematic lighting with moody ambiance_20_3.0.png - text: cinematic lighting with moody ambiance output: url: images/cinematic lighting with moody ambiance_21_3.0.png - text: cinematic lighting with moody ambiance output: url: images/cinematic lighting with moody ambiance_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "cinematic lighting with moody ambiance" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - cinematic lighting with moody ambiance (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/cinematic lighting with moody ambiance_17_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting with moody ambiance_17_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting with moody ambiance_17_3.0.png" width=256 height=256 /> | | <img src="images/cinematic lighting with moody ambiance_19_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting with moody ambiance_19_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting with moody ambiance_19_3.0.png" width=256 height=256 /> | | <img src="images/cinematic lighting with moody ambiance_20_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting with moody ambiance_20_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting with moody ambiance_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` cinematic lighting with moody ambiance ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.cinematic-lighting-with-moody-ambiance', weight_name='cinematic lighting with moody ambiance.safetensors', adapter_name="cinematic lighting with moody ambiance") # Activate the LoRA pipe.set_adapters(["cinematic lighting with moody ambiance"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, cinematic lighting with moody ambiance" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). 
By joining our Patreon, you'll gain access to an ever-growing library of over 1,140 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
zooknowsys/humanizeLoRA_0127
zooknowsys
2024-01-27T07:28:07Z
2
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Qwen/Qwen-VL-Chat", "base_model:adapter:Qwen/Qwen-VL-Chat", "region:us" ]
null
2024-01-27T07:27:32Z
--- library_name: peft base_model: Qwen/Qwen-VL-Chat --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
farzintava/Reinforce-cartpole-v1
farzintava
2024-01-27T07:28:00Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T07:27:51Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AustinMcMike/mistral-7b-ft-test
AustinMcMike
2024-01-27T07:23:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-27T07:23:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
octnn/q-FrozenLake-v1-4x4-noSlippery
octnn
2024-01-27T07:20:57Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T07:20:54Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym # or `import gymnasium as gym` on newer installs model = load_from_hub(repo_id="octnn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Benzaminnie/distilbert-base-uncased-finetuned-emotion
Benzaminnie
2024-01-27T07:10:58Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-27T06:57:01Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9215 - name: F1 type: f1 value: 0.9215027409425609 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2220 - Accuracy: 0.9215 - F1: 0.9215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.83 | 1.0 | 250 | 0.3231 | 0.904 | 0.9029 | | 0.2532 | 2.0 | 500 | 0.2220 | 0.9215 | 0.9215 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
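A minimal sketch for trying the classifier (`top_k=None` returns scores for all emotion labels instead of just the argmax):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Benzaminnie/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled my model finally converged!", top_k=None))
```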
gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune
gaurav-mac
2024-01-27T07:09:06Z
21
0
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-12-28T17:49:21Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune') model = AutoModel.from_pretrained('gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gaurav-mac/hindi-sensim-sbert-usingsumodataset-basel3cubepune) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 80 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 15, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 1e-06 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1200, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
geektech/BERTopic-Israel-Palestine-Title3
geektech
2024-01-27T07:07:28Z
4
0
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
2024-01-27T07:07:27Z
--- tags: - bertopic library_name: bertopic pipeline_tag: text-classification --- # BERTopic-Israel-Palestine-Title3 This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("geektech/BERTopic-Israel-Palestine-Title3") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 150 * Number of training documents: 8456 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | israel - humanity - shorts - gaza - palestine | 10 | -1_israel_humanity_shorts_gaza | | 0 | support - palestine support - support palestine - palestine support palestine - palestine | 2831 | 0_support_palestine support_support palestine_palestine support palestine | | 1 | celebrities - celebrities who - who - who support - celebrities who support | 239 | 1_celebrities_celebrities who_who_who support | | 2 | pray for - pray - pray for palestine - for palestine - for | 204 | 2_pray for_pray_pray for palestine_for palestine | | 3 | strikes - air - air strikes - israeli air - israeli air strikes | 197 | 3_strikes_air_air strikes_israeli air | | 4 | prayforpalestine - prayforpalestine palestine - palestine prayforpalestine - savepalestine - freepalestine | 190 | 4_prayforpalestine_prayforpalestine palestine_palestine prayforpalestine_savepalestine | | 5 | hostage - israeli hostage - hostages - released - israeli | 145 | 5_hostage_israeli hostage_hostages_released | | 6 | vs - israel vs - vs israel - vs palestine - palestine vs | 133 | 6_vs_israel vs_vs israel_vs palestine | | 7 | humanitarian - crisis - humanitarian crisis - in gaza - crisis in gaza | 129 | 7_humanitarian_crisis_humanitarian crisis_in gaza | | 8 | இஸ - israel war - umapathy israel - umapathy - israel | 124 | 8_இஸ_israel war_umapathy israel_umapathy | | 9 | freedom fighters - fighters - freedom - palestine freedom - palestine freedom fighters | 121 | 9_freedom fighters_fighters_freedom_palestine freedom | | 10 | warzone - israel warzone - warzone palestine - israel - israel warzone palestine | 107 | 10_warzone_israel warzone_warzone palestine_israel | | 11 | save gaza - save - palestine save gaza - gaza palestine - save gaza palestine | 98 | 11_save gaza_save_palestine save gaza_gaza palestine | | 12 | gaza humanity - allah islam - humanity sad allah islam - sad allah islam - sad allah | 83 | 12_gaza humanity_allah islam_humanity sad allah islam_sad allah islam | | 13 | hamas - hamas war - israel hamas war - israel hamas - war | 79 | 13_hamas_hamas war_israel hamas war_israel hamas | | 14 | protest - children - freepalaestine - savepalestine - pray | 77 | 14_protest_children_freepalaestine_savepalestine | | 15 | beautiful - military - girls - beautiful israel - israel military | 77 | 15_beautiful_military_girls_beautiful israel | | 16 | truce - hamas truce - israel hamas truce - israel hamas - hamas | 76 | 16_truce_hamas truce_israel hamas truce_israel hamas | | 17 | al aqsa - al aqsa mosque - aqsa mosque - aqsa - mosque | 65 | 17_al aqsa_al aqsa mosque_aqsa mosque_aqsa | | 18 | west bank - bank - west - in west bank - in west | 63 | 18_west bank_bank_west_in west bank | | 19 | shorts 
- trendingshorts freepalestine shorts - trendingshorts freepalestine - trendingshorts - cover | 62 | 19_shorts_trendingshorts freepalestine shorts_trendingshorts freepalestine_trendingshorts | | 20 | free palestine - free - palestine freepalestine - freepalestine - free palestine freepalestine | 62 | 20_free palestine_free_palestine freepalestine_freepalestine | | 21 | video - footage - shows - releases - drone | 62 | 21_video_footage_shows_releases | | 22 | dance - beautiful - military - usa gaza - israel military | 61 | 22_dance_beautiful_military_usa gaza | | 23 | free - humanity - free palestine - freedom - freepalestine humanity | 60 | 23_free_humanity_free palestine_freedom | | 24 | election - freegaza - gaza election - bts - freepalestine gaza | 58 | 24_election_freegaza_gaza election_bts | | 25 | comparison - power - military power - vs - vs israel military | 58 | 25_comparison_power_military power_vs | | 26 | police - the - are people - gangster - spy | 58 | 26_police_the_are people_gangster | | 27 | hamas - subscribe to firstpost - to firstpost - firstpost - subscribe to | 57 | 27_hamas_subscribe to firstpost_to firstpost_firstpost | | 28 | protest - protest in - protesters - protests - in | 55 | 28_protest_protest in_protesters_protests | | 29 | israel palestine - palestine conflict - conflict - israel palestine conflict - israel palestine war | 54 | 29_israel palestine_palestine conflict_conflict_israel palestine conflict | | 30 | art - drawing - free palestine - lovepalestine - art free | 52 | 30_art_drawing_free palestine_lovepalestine | | 31 | prayforpalestine - freepalestine prayforpalestine - prayforpalestine savepalestine - freepalestine - freepalestine prayforpalestine savepalestine | 52 | 31_prayforpalestine_freepalestine prayforpalestine_prayforpalestine savepalestine_freepalestine | | 32 | savegazapalestine - freepalestine savegazapalestine - savegazapalestine freepalestine - freepalestine - savepalestine savegazapalestine | 51 | 32_savegazapalestine_freepalestine savegazapalestine_savegazapalestine freepalestine_freepalestine | | 33 | prophetmuhammadﷺ - prayforpalestine - allah - prophetmuhammadﷺ allah - allah palestine prayforpalestine | 50 | 33_prophetmuhammadﷺ_prayforpalestine_allah_prophetmuhammadﷺ allah | | 34 | hindu - india - dr - pm - vs | 49 | 34_hindu_india_dr_pm | | 35 | viral - viral prayforpalestine - viral prayforpalestine gaza - viral prayforpalestine gaza hamas - prayforpalestine gaza hamas | 49 | 35_viral_viral prayforpalestine_viral prayforpalestine gaza_viral prayforpalestine gaza hamas | | 36 | sakuraschoolsimulator - sakura - dramasakura - palestine sakuraschoolsimulator - free palestine sakuraschoolsimulator | 49 | 36_sakuraschoolsimulator_sakura_dramasakura_palestine sakuraschoolsimulator | | 37 | shorts - trending - finland - pangea - usa ww2 | 46 | 37_shorts_trending_finland_pangea | | 38 | free palestine shorts - palestine shorts - shorts free palestine - shorts free - free palestine | 46 | 38_free palestine shorts_palestine shorts_shorts free palestine_shorts free | | 39 | free palestine free - free palestine free palestine - palestine free palestine free - palestine free palestine - palestine free | 44 | 39_free palestine free_free palestine free palestine_palestine free palestine free_palestine free palestine | | 40 | countries - countries that - countries that support - that support - that | 44 | 40_countries_countries that_countries that support_that support | | 41 | war - warzone - israel warzone - warzone war - israel | 43 | 
41_war_warzone_israel warzone_warzone war | | 42 | savegazapalestine - freepalestine savegazapalestine - savepalestina - savegazapalestine savepalestina - palestina | 42 | 42_savegazapalestine_freepalestine savegazapalestine_savepalestina_savegazapalestine savepalestina | | 43 | hamas - freedom fighters - freedom - fighters - hamas are | 40 | 43_hamas_freedom fighters_freedom_fighters | | 44 | youtubeshorts - free - youtube - free palestine - palestine youtubeshorts | 39 | 44_youtubeshorts_free_youtube_free palestine | | 45 | song - lagu - palestine song - singing - yeshua | 38 | 45_song_lagu_palestine song_singing | | 46 | boycott - products - boycott israel - boycottisraeliproducts - israel products | 38 | 46_boycott_products_boycott israel_boycottisraeliproducts | | 47 | humanitarian - crisis - humanitarian crisis - crisis in - crisis in gaza | 38 | 47_humanitarian_crisis_humanitarian crisis_crisis in | | 48 | she - girl - her - brave - woman | 36 | 48_she_girl_her_brave | | 49 | gaza savepalestina savegazapalestine - freepalestine palestine gaza savepalestina - palestine gaza savepalestina savegazapalestine - palestine gaza savepalestina - freepalestine palestine gaza | 35 | 49_gaza savepalestina savegazapalestine_freepalestine palestine gaza savepalestina_palestine gaza savepalestina savegazapalestine_palestine gaza savepalestina | | 50 | shorts israel palestine - shorts israel palestine war - israel palestine war - shorts israel - palestine war | 35 | 50_shorts israel palestine_shorts israel palestine war_israel palestine war_shorts israel | | 51 | bible - jesus - god - biblestorying - prophecy | 34 | 51_bible_jesus_god_biblestorying | | 52 | trendingnow - war update - breakingnews - update - war update news | 33 | 52_trendingnow_war update_breakingnews_update | | 53 | bendera - flag - palestine flag - bendera palestine - dan | 33 | 53_bendera_flag_palestine flag_bendera palestine | | 54 | genocide - case - south - south africa - genocide in | 32 | 54_genocide_case_south_south africa | | 55 | mohammedﷺ - madinasharif - madarsa - viralsupport - frinds | 31 | 55_mohammedﷺ_madinasharif_madarsa_viralsupport | | 56 | drone - drones - helikopter - military - vehicles | 31 | 56_drone_drones_helikopter_military | | 57 | save - children - save children - children save - children humanity gaza | 31 | 57_save_children_save children_children save | | 58 | forces kill - israeli forces kill - kill - israeli forces - forces | 30 | 58_forces kill_israeli forces kill_kill_israeli forces | | 59 | bendera - menggambar bendera - menggambar - prayforpalestine menggambar - prayforpalestine menggambar bendera | 30 | 59_bendera_menggambar bendera_menggambar_prayforpalestine menggambar | | 60 | palestinians - jenin - raid - kill - israeli forces kill | 29 | 60_palestinians_jenin_raid_kill | | 61 | israeli forces - forces - shorts israeli forces - fire - ground | 29 | 61_israeli forces_forces_shorts israeli forces_fire | | 62 | allah - ya - ya allah - islam - oh | 29 | 62_allah_ya_ya allah_islam | | 63 | humanitarian - crisis - humanitarian crisis - and - in gaza | 29 | 63_humanitarian_crisis_humanitarian crisis_and | | 64 | iran - attack on - jets - fighter - fighter jets | 28 | 64_iran_attack on_jets_fighter | | 65 | freepalestine ceasefire humanity gaza - freepalestine ceasefire humanity - freepalestine ceasefire - ceasefire humanity gaza - ceasefire | 28 | 65_freepalestine ceasefire humanity gaza_freepalestine ceasefire humanity_freepalestine ceasefire_ceasefire humanity gaza | | 66 | ronaldo - 
footballers - messi - palestine ronaldo - support | 28 | 66_ronaldo_footballers_messi_palestine ronaldo | | 67 | de - de 2023 - 2023 - de agosto de 2023 - de agosto de | 27 | 67_de_de 2023_2023_de agosto de 2023 | | 68 | child - palestinian - ceasefireingazanow ceasefire - palestine child - anak | 27 | 68_child_palestinian_ceasefireingazanow ceasefire_palestine child | | 69 | trending - israel warzone - trending viral - trending viral israel - warzone | 26 | 69_trending_israel warzone_trending viral_trending viral israel | | 70 | support israel - support - companies - that support israel - that support | 26 | 70_support israel_support_companies_that support israel | | 71 | israel palestine conflict - israel palestine - palestine conflict - conflict - israel palestine conflict shorts | 25 | 71_israel palestine conflict_israel palestine_palestine conflict_conflict | | 72 | stand with - stand - stand with palestine - with palestine - with | 25 | 72_stand with_stand_stand with palestine_with palestine | | 73 | israeli air - air - israeli air strikes - air strikes - strikes | 24 | 73_israeli air_air_israeli air strikes_air strikes | | 74 | settlements - settlements in - west bank - bank - west | 24 | 74_settlements_settlements in_west bank_bank | | 75 | israeli forces demolish - demolish - forces demolish - israeli forces demolish palestinian - forces demolish palestinian | 24 | 75_israeli forces demolish_demolish_forces demolish_israeli forces demolish palestinian | | 76 | com aline - aline - israel com - israel com aline - com | 23 | 76_com aline_aline_israel com_israel com aline | | 77 | funeral - killed - funeral for - mourners - killed in israeli | 23 | 77_funeral_killed_funeral for_mourners | | 78 | freepalaestine protest gazaunderattack savepalestine - religion freepalaestine protest - religion freepalaestine protest gazaunderattack - freepalaestine protest gazaunderattack - freepalaestine protest | 22 | 78_freepalaestine protest gazaunderattack savepalestine_religion freepalaestine protest_religion freepalaestine protest gazaunderattack_freepalaestine protest gazaunderattack | | 79 | speech - dj - speak - speaking - minister rishi | 22 | 79_speech_dj_speak_speaking | | 80 | youtubeshorts - shorts youtubeshorts - youtubeshorts shorts - freepalaestine youtubeshorts - youtubeshorts palestine | 22 | 80_youtubeshorts_shorts youtubeshorts_youtubeshorts shorts_freepalaestine youtubeshorts | | 81 | cat - kucing - crow - kitten - catshorts | 22 | 81_cat_kucing_crow_kitten | | 82 | childhood children - childhood - childhood children humanity gaza - childhood children humanity - children humanity | 22 | 82_childhood children_childhood_childhood children humanity gaza_childhood children humanity | | 83 | bank - west - west bank - squash - westbank | 22 | 83_bank_west_west bank_squash | | 84 | humanitarian - crisis - humanitarian crisis - global - crisis in israel | 21 | 84_humanitarian_crisis_humanitarian crisis_global | | 85 | warzone - israel warzone - youtubeshorts israel warzone - youtubeshorts - youtubeshorts israel | 21 | 85_warzone_israel warzone_youtubeshorts israel warzone_youtubeshorts | | 86 | humanity gaza - freepalestine palestine humanity gaza - freepalestine gaza humanity - palestine humanity gaza - freepalestine palestine humanity | 21 | 86_humanity gaza_freepalestine palestine humanity gaza_freepalestine gaza humanity_palestine humanity gaza | | 87 | shortvideo palestine ytshorts - humanity attitude islam - humanity attitude - attitude islam - attitude islam shortvideo | 21 | 
87_shortvideo palestine ytshorts_humanity attitude islam_humanity attitude_attitude islam | | 88 | conflict what - hamas conflict what - hamas conflict what you - israel hamas conflict what - behind the israel hamas | 21 | 88_conflict what_hamas conflict what_hamas conflict what you_israel hamas conflict what | | 89 | stand - stand with - with palestine - stand with palestine - with | 20 | 89_stand_stand with_with palestine_stand with palestine | | 90 | food - אוכל - israel food - סוכרת אוכל - סוכרת | 20 | 90_food_אוכל_israel food_סוכרת אוכל | | 91 | israeli army israeli army - army israeli - army israeli army - israeli army israeli - israeli army | 20 | 91_israeli army israeli army_army israeli_army israeli army_israeli army israeli | | 92 | strongest - russian - navy - usa - strongest navy | 19 | 92_strongest_russian_navy_usa | | 93 | ronaldo - cristiano ronaldo - cristiano - football - ronaldo muslim | 19 | 93_ronaldo_cristiano ronaldo_cristiano_football | | 94 | emotional - islamic - islamic motivation - viral trending allah - motivation | 19 | 94_emotional_islamic_islamic motivation_viral trending allah | | 95 | save palestine - save - shorts save palestine - shorts save - palestine savepalestine | 19 | 95_save palestine_save_shorts save palestine_shorts save | | 96 | savegazapalestine - savepalestina - mashaallah - savepalestina savegazapalestine - savegazapalestine savegaza | 19 | 96_savegazapalestine_savepalestina_mashaallah_savepalestina savegazapalestine | | 97 | humanity - humanrights - peace - israel america - ytshorts humanity | 18 | 97_humanity_humanrights_peace_israel america | | 98 | shapiro - ben shapiro - ben - andrew - andrew tate | 18 | 98_shapiro_ben shapiro_ben_andrew | | 99 | save humanity - savehumanity - save - savehumanity savepalastine palestine humanity - savepalastine palestine humanity | 17 | 99_save humanity_savehumanity_save_savehumanity savepalastine palestine humanity | | 100 | ytshorts shortsfeed israel - shorts israel - kon - shortsfeed israel - ytshorts shortsfeed | 17 | 100_ytshorts shortsfeed israel_shorts israel_kon_shortsfeed israel | | 101 | israel military - power - military - israel military power - military power | 16 | 101_israel military_power_military_israel military power | | 102 | اولى القبلتين savegazapalestine shorts - القبلتين - vs israel اولى - israel اولى - israel اولى القبلتين | 16 | 102_اولى القبلتين savegazapalestine shorts_القبلتين_vs israel اولى_israel اولى | | 103 | this is humanity - humanity - is humanity - humanity palestine - kerosene | 16 | 103_this is humanity_humanity_is humanity_humanity palestine | | 104 | raid - israeli forces raid - forces raid - raid west bank - raid west | 16 | 104_raid_israeli forces raid_forces raid_raid west bank | | 105 | whatsapp - whatsapp status - sufyanashrfi - whatsapp status shorts - status | 16 | 105_whatsapp_whatsapp status_sufyanashrfi_whatsapp status shorts | | 106 | against humanity - keluargaperantau gaza war - crime - crimes - keluargaperantau gaza | 16 | 106_against humanity_keluargaperantau gaza war_crime_crimes | | 107 | history - evolution - israel history - of israel - evolution of | 15 | 107_history_evolution_israel history_of israel | | 108 | status - palestine attitude status - palestine attitude - attitude status - shorts viral palestine | 15 | 108_status_palestine attitude status_palestine attitude_attitude status | | 109 | malema - julius - julius malema - eff - eff leader | 15 | 109_malema_julius_julius malema_eff | | 110 | jews - orthodox - jews in - यह - jew | 15 | 
110_jews_orthodox_jews in_यह | | 111 | sisters - shorts news - in west bank - in west - dies | 15 | 111_sisters_shorts news_in west bank_in west | | 112 | bike - superbike - saazidsaifi72 - bikelover - shorts bike | 15 | 112_bike_superbike_saazidsaifi72_bikelover | | 113 | israeli forces - forces - detained - detain - palestinian | 14 | 113_israeli forces_forces_detained_detain | | 114 | ll free - palestine ll free - palestine ll free palestine - ll free palestine - palestine ll | 14 | 114_ll free_palestine ll free_palestine ll free palestine_ll free palestine | | 115 | humanity - humanity is - israelpalestineconflict israel war - more humanity in - only formality humanity | 14 | 115_humanity_humanity is_israelpalestineconflict israel war_more humanity in | | 116 | hamas - israel hits hamas - israel hits hamas targets - hits hamas targets - hits hamas targets in | 14 | 116_hamas_israel hits hamas_israel hits hamas targets_hits hamas targets | | 117 | child - palestinian child - after - israeli air strikes - air strikes | 14 | 117_child_palestinian child_after_israeli air strikes | | 118 | race freepalestine viral - video shorts short - race - palestine vs israel simple - simple marbal race | 14 | 118_race freepalestine viral_video shorts short_race_palestine vs israel simple | | 119 | yemen - usa - war with the - and usa - یمن | 14 | 119_yemen_usa_war with the_and usa | | 120 | palestinian woman - woman - woman in - palestinian - israeli forces | 14 | 120_palestinian woman_woman_woman in_palestinian | | 121 | indonesia - dunia - piala - piala dunia - timnas israel | 14 | 121_indonesia_dunia_piala_piala dunia | | 122 | youtubeshorts - youtubeshorts shorts - youtube - india shorts - shorts youtubeshorts | 13 | 122_youtubeshorts_youtubeshorts shorts_youtube_india shorts | | 123 | will be free - be free - will be - palestine will be free - palestine will be | 13 | 123_will be free_be free_will be_palestine will be free | | 124 | killed - palestinian killed - palestinian - west bank - bank | 13 | 124_killed_palestinian killed_palestinian_west bank | | 125 | hospital in - watch israeli forces raid - watch israeli forces - hospital in gaza - hospital | 13 | 125_hospital in_watch israeli forces raid_watch israeli forces_hospital in gaza | | 126 | palestine aqsa masjeed - aqsa masjeed - masjeed - palestine aqsa - palestine support palestine aqsa | 12 | 126_palestine aqsa masjeed_aqsa masjeed_masjeed_palestine aqsa | | 127 | list of - palestine and israel informative - list of celebrities - informative - informative video | 12 | 127_list of_palestine and israel informative_list of celebrities_informative | | 128 | india - warzone india - india supports - india support - israel warzone india | 12 | 128_india_warzone india_india supports_india support | | 129 | nancy - south korea 4k status - south korea 4k - queen of south korea - queen of south | 12 | 129_nancy_south korea 4k status_south korea 4k_queen of south korea | | 130 | three - three palestinians - kill three - kill - forces kill | 12 | 130_three_three palestinians_kill three_kill | | 131 | dream - shorts islam - egypt israel - free palestine - islam trending allah | 12 | 131_dream_shorts islam_egypt israel_free palestine | | 132 | love - we love - love palestine - we - freepalestine love | 12 | 132_love_we love_love palestine_we | | 133 | salam - salam ya - ya - humanity salam ya - humanity salam | 12 | 133_salam_salam ya_ya_humanity salam ya | | 134 | save - save gaza - shorts save gaza - shorts save - save gaza save palestine | 12 | 
134_save_save gaza_shorts save gaza_shorts save | | 135 | china - chinese - with palki sharma - vantage with palki - vantage with palki sharma | 12 | 135_china_chinese_with palki sharma_vantage with palki | | 136 | quotes - benjamin netanyahu quotes - benjaminnetanyahu israel - netanyahu quotes - viral benjaminnetanyahu israel warzone | 12 | 136_quotes_benjamin netanyahu quotes_benjaminnetanyahu israel_netanyahu quotes | | 137 | military - training - israel military - military israel - israel military training | 11 | 137_military_training_israel military_military israel | | 138 | australia - gaza humanity pray - gaza humanity pray savepalestine - humanity pray savepalestine - pray savepalestine | 11 | 138_australia_gaza humanity pray_gaza humanity pray savepalestine_humanity pray savepalestine | | 139 | settlements - bank city of nablus - occupied west bank city - city of nablus - of nablus | 11 | 139_settlements_bank city of nablus_occupied west bank city_city of nablus | | 140 | hate - hate israel - that hate - countries that hate - that hate israel | 11 | 140_hate_hate israel_that hate_countries that hate | | 141 | videos - islamic videos - religion - islamic - allah | 11 | 141_videos_islamic videos_religion_islamic | | 142 | genocide - stop - stopwars israel - stopwars israel palestine - the genocide | 11 | 142_genocide_stop_stopwars israel_stopwars israel palestine | | 143 | islam - palestine islam free palestine - palestine islam free - islam free palestine - free | 11 | 143_islam_palestine islam free palestine_palestine islam free_islam free palestine | | 144 | freepalestine palestina 2023 - gaza freepalestine palestina - gaza freepalestine palestina 2023 - palestina 2023 - ytshorts غزة | 11 | 144_freepalestine palestina 2023_gaza freepalestine palestina_gaza freepalestine palestina 2023_palestina 2023 | | 145 | history - history palestine - of palestine palestine gaza - history of palestine palestine - history of palestine | 10 | 145_history_history palestine_of palestine palestine gaza_history of palestine palestine | | 146 | deepens - humanitarian crisis deepens - crisis deepens - humanitarian crisis - crisis | 10 | 146_deepens_humanitarian crisis deepens_crisis deepens_humanitarian crisis | | 147 | allah save - allah save gaza - allah - ya allah save gaza - ya allah save | 10 | 147_allah save_allah save gaza_allah_ya allah save gaza | | 148 | rally - palestine rally - kingdom shorts freepalestine - kingdom shorts - in united kingdom | 10 | 148_rally_palestine rally_kingdom shorts freepalestine_kingdom shorts | </details> ## Training hyperparameters * calculate_probabilities: False * language: multilingual * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 4) * nr_topics: None * seed_topic_list: None * top_n_words: 25 * verbose: False * zeroshot_min_similarity: 0.7 * zeroshot_topic_list: None ## Framework versions * Numpy: 1.24.3 * HDBSCAN: 0.8.33 * UMAP: 0.5.5 * Pandas: 2.1.4 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.2.2 * Transformers: 4.36.2 * Numba: 0.58.1 * Plotly: 5.16.1 * Python: 3.10.12
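The usage section of this card is not shown in this record, so here is a minimal, hedged sketch of loading and querying a BERTopic model from the Hub with the library versions listed above. The repo id is a placeholder (the real id is this record's `modelId` field), and Hub loading assumes the model was saved with `push_to_hf_hub` and that the sentence-transformers embedding backend is installed.

```python
from bertopic import BERTopic

# Placeholder repo id -- substitute this record's modelId.
topic_model = BERTopic.load("<author>/<repo>")

# Assign new documents to the topics listed in the table above.
docs = ["free palestine shorts video", "israel palestine conflict explained"]
topics, probs = topic_model.transform(docs)

# Inspect the top keywords of the topic assigned to the first document.
print(topic_model.get_topic(topics[0]))
```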
jeiku/NewJeans_3B_GGUF
jeiku
2024-01-27T07:02:14Z
10
0
null
[ "gguf", "mergekit", "merge", "arxiv:2203.05482", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-27T06:20:56Z
--- base_model: - jeiku/Gnosis_StableLM tags: - mergekit - merge --- # mumufinal This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * mumu2 + [jeiku/Gnosis_StableLM](https://huggingface.co/jeiku/Gnosis_StableLM) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: linear models: - model: mumu2+jeiku/Gnosis_StableLM parameters: weight: 1 dtype: float16 ```
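The linear method named above is, at its core, a weighted average of corresponding parameter tensors. The sketch below only illustrates that idea; it is not mergekit's implementation, and `state_dicts`/`weights` are hypothetical inputs standing in for the models and weights from the YAML config.

```python
import torch

def linear_merge(state_dicts: list[dict], weights: list[float]) -> dict:
    """Weighted average of matching parameter tensors (illustration only)."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        # Average each tensor across models, scaled by its model's weight.
        merged[name] = sum(
            w * sd[name].float() for w, sd in zip(weights, state_dicts)
        ) / total
    return merged
```

With a single model at weight 1, as in the config above, this reduces to passing the weights through unchanged (modulo the dtype cast).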
TMOU715/phi-2-qlora
TMOU715
2024-01-27T06:54:35Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T06:54:30Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
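The "How to Get Started" section above is empty, so the following is a minimal sketch assuming the standard PEFT adapter workflow: the base model `microsoft/phi-2` comes from this card's metadata, the adapter id from this record's modelId, and the prompt is purely illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model declared in the adapter config.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# Attach the QLoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, "TMOU715/phi-2-qlora")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```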
Sailor01/phi-2-qlora
Sailor01
2024-01-27T06:53:33Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T06:53:29Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
fong33/phi-2-qlora
fong33
2024-01-27T06:53:22Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T06:53:17Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Askahoward/phi-2-qlora
Askahoward
2024-01-27T06:53:15Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T06:53:12Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Wahlaalne/phi-2-qlora
Wahlaalne
2024-01-27T06:53:07Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T06:53:00Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
ramsi-k/dqn-SpaceInvadersNoFrameskip-v4
ramsi-k
2024-01-27T06:52:48Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T06:52:14Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 629.50 +/- 244.17 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ramsi-k -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ramsi-k -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ramsi-k ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
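Besides the RL Zoo commands above, the checkpoint can also be loaded directly in Python. This is a hedged sketch using `huggingface_sb3`: the checkpoint filename follows the usual RL Zoo naming convention and is an assumption, and the `custom_objects` entries are the common workaround for loading checkpoints across SB3 versions.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the RL Zoo convention: <algo>-<env>.zip
checkpoint = load_from_hub(
    repo_id="ramsi-k/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# Override schedule objects that may not unpickle across SB3 versions.
model = DQN.load(
    checkpoint,
    custom_objects={
        "learning_rate": 0.0,
        "lr_schedule": lambda _: 0.0,
        "exploration_schedule": lambda _: 0.0,
    },
)
```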
yoshinori-sano/bert-base-japanese-v3-wrime-sentiment
yoshinori-sano
2024-01-27T06:43:26Z
121
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-27T05:52:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
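Since the card above is the auto-generated template, here is a minimal sketch of querying the model with the `transformers` pipeline, based only on this record's `text-classification` pipeline tag. The label names are unknown, and the extra tokenizer dependencies (`fugashi`, `unidic-lite`) are an assumption based on the Japanese BERT base model family.

```python
from transformers import pipeline

# Japanese BERT tokenizers typically require: pip install fugashi unidic-lite
classifier = pipeline(
    "text-classification",
    model="yoshinori-sano/bert-base-japanese-v3-wrime-sentiment",
)

# "The movie I watched today was truly moving."
print(classifier("今日観た映画は本当に感動的だった。"))
```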
gotchu/34b-3
gotchu
2024-01-27T06:39:26Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:gotchu/merge-34b-2", "base_model:merge:gotchu/merge-34b-2", "base_model:gotchu/roleplaymodel", "base_model:merge:gotchu/roleplaymodel", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T06:19:44Z
--- base_model: - gotchu/roleplaymodel - gotchu/merge-34b-2 tags: - mergekit - merge --- # merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [gotchu/roleplaymodel](https://huggingface.co/gotchu/roleplaymodel) * [gotchu/merge-34b-2](https://huggingface.co/gotchu/merge-34b-2) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: model: path: gotchu/merge-34b-2 dtype: float16 merge_method: slerp parameters: t: - filter: self_attn value: [0.0, 0.5, 0.3, 0.7, 1.0] - filter: mlp value: [1.0, 0.5, 0.7, 0.3, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 60] model: model: path: gotchu/merge-34b-2 - layer_range: [0, 60] model: model: path: gotchu/roleplaymodel ```
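SLERP interpolates between two models along the arc between their (flattened) weight vectors rather than along the straight line between them, which better preserves the norm when the vectors point in different directions. The sketch below illustrates the interpolation for a single pair of tensors; it is not mergekit's implementation, and the parallel-vector threshold is an arbitrary choice.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustration only)."""
    v0f, v1f = v0.flatten().float(), v1.flatten().float()
    v0n = v0f / (v0f.norm() + eps)
    v1n = v1f / (v1f.norm() + eps)
    theta = torch.arccos(torch.clamp(torch.dot(v0n, v1n), -1.0, 1.0))
    if theta.abs() < 1e-4:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return ((1 - t) * v0f + t * v1f).reshape(v0.shape)
    out = (torch.sin((1 - t) * theta) * v0f + torch.sin(t * theta) * v1f) / torch.sin(theta)
    return out.reshape(v0.shape)
```

In the config above, `t` varies per layer and per module (`self_attn` vs `mlp`), which is why the filters list different interpolation schedules.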
Chen311/Lora1.5
Chen311
2024-01-27T06:33:35Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-18T01:50:17Z
--- license: creativeml-openrail-m ---
psugam/hello
psugam
2024-01-27T06:31:02Z
120
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-27T06:28:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
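The card above is the empty template, so the snippet below is only a loosely grounded sketch based on this record's `text2text-generation` pipeline tag and T5 architecture; the model's actual fine-tuning task is unknown, and the prompt is illustrative.

```python
from transformers import pipeline

# Task taken from this record's pipeline_tag; the prompt is a guess.
pipe = pipeline("text2text-generation", model="psugam/hello")
print(pipe("translate English to French: Hello, world!"))
```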
0x7o/fialka-13B-v4
0x7o
2024-01-27T06:22:56Z
79
4
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "ru", "dataset:0x7194633/fialka-v3-data", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-21T11:26:20Z
--- license: apache-2.0 datasets: - 0x7194633/fialka-v3-data language: - ru pipeline_tag: text-generation --- # Fialka v4.0 13B ![Violet](https://i.imgur.com/tQKuehR.png) ## Description Fialka language models are trained to follow instructions and hold conversations in Russian. Version 4.0 is the third version further optimized with RLHF; it is more responsive and more informative. ## Usage The model uses the same prompt format as Zephyr. ``` <|user|> Что такое мем?</s> <|assistant|> Мем (англ. meme) - это единица социального поведения, которая быстро распространяется в интернете или в социальных сетях с целью передачи информации и идей. Обычно мемы являются шутками, стишками, изображениями или видео и имеют юмористический или сатирический характер, но могут содержать и более серьезные идеи, такие как политические или социальные протесты, и даже угрозы. Мемы могут служить для создания и распространения контента и информации, а также для выражения мнения или чувств автора. ```
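As a hedged sketch, the Zephyr-style prompt above (the user turn asks "Что такое мем?", i.e. "What is a meme?", and the model answers in Russian) can be reproduced with plain `transformers` generation; the sampling parameters are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "0x7o/fialka-13B-v4"
tokenizer = AutoTokenizer.from_pretrained(repo)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Zephyr-style turn structure, as shown in the card's example.
prompt = "<|user|>\nЧто такое мем?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```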
vadhri/dqn-SpaceInvadersNoFrameskip-v4
vadhri
2024-01-27T06:19:32Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T06:18:58Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 530.50 +/- 106.76 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vadhri -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vadhri -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vadhri ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
frankc350/opt-125m-sft
frankc350
2024-01-27T06:19:24Z
178
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T06:12:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Heng666/opt-125m-sft
Heng666
2024-01-27T06:18:35Z
181
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T06:13:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Crystalcareai/CrystalMistral_7b_v.04
Crystalcareai
2024-01-27T06:03:57Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Crystalcareai/CrystalMistral_7b_v.01", "Crystalcareai/CrystalMistral_7b_v.02", "conversational", "base_model:Crystalcareai/CrystalMistral_7b_v.01", "base_model:merge:Crystalcareai/CrystalMistral_7b_v.01", "base_model:Crystalcareai/CrystalMistral_7b_v.02", "base_model:merge:Crystalcareai/CrystalMistral_7b_v.02", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T05:58:17Z
--- tags: - merge - mergekit - lazymergekit - Crystalcareai/CrystalMistral_7b_v.01 - Crystalcareai/CrystalMistral_7b_v.02 base_model: - Crystalcareai/CrystalMistral_7b_v.01 - Crystalcareai/CrystalMistral_7b_v.02 --- # CrystalMistral_7b_v.04 CrystalMistral_7b_v.04 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Crystalcareai/CrystalMistral_7b_v.01](https://huggingface.co/Crystalcareai/CrystalMistral_7b_v.01) * [Crystalcareai/CrystalMistral_7b_v.02](https://huggingface.co/Crystalcareai/CrystalMistral_7b_v.02) ## 🧩 Configuration ```yaml slices: - sources: - model: Crystalcareai/CrystalMistral_7b_v.01 layer_range: [0, 32] - model: Crystalcareai/CrystalMistral_7b_v.02 layer_range: [0, 32] merge_method: slerp base_model: Crystalcareai/CrystalMistral_7b_v.01 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Crystalcareai/CrystalMistral_7b_v.04" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
LazarusNLP/all-indo-e5-small-v2
LazarusNLP
2024-01-27T06:01:48Z
464
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "ind", "dataset:indonli", "dataset:indolem/indo_story_cloze", "dataset:unicamp-dl/mmarco", "dataset:miracl/miracl", "dataset:SEACrowd/wrete", "dataset:SEACrowd/indolem_ntp", "dataset:khalidalt/tydiqa-goldp", "dataset:SEACrowd/facqa", "dataset:indonesian-nlp/lfqa_id", "dataset:jakartaresearch/indoqa", "dataset:jakartaresearch/id-paraphrase-detection", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-01-27T04:23:40Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - indonli - indolem/indo_story_cloze - unicamp-dl/mmarco - miracl/miracl - SEACrowd/wrete - SEACrowd/indolem_ntp - khalidalt/tydiqa-goldp - SEACrowd/facqa - indonesian-nlp/lfqa_id - jakartaresearch/indoqa - jakartaresearch/id-paraphrase-detection language: - ind --- # LazarusNLP/all-indo-e5-small-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('LazarusNLP/all-indo-e5-small-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('LazarusNLP/all-indo-e5-small-v2') model = AutoModel.from_pretrained('LazarusNLP/all-indo-e5-small-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=LazarusNLP/all-indo-e5-small-v2) ## Training The model was trained with the parameters: **DataLoader**: `MultiDatasetDataLoader.MultiDatasetDataLoader` of length 968 with parameters: ``` {'batch_size_pairs': 384, 'batch_size_triplets': 256} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 0, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 484, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Dhanraj1503/a2c-PandaReachDense-v3
Dhanraj1503
2024-01-27T05:55:18Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T05:51:09Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.18 +/- 0.13 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
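The card's usage section is still a TODO; below is a minimal completed sketch under two assumptions: the checkpoint filename follows the usual `<algo>-<env>.zip` convention of `huggingface_sb3` uploads, and `panda-gym` is installed to register the environment.

```python
import gymnasium as gym
import panda_gym  # noqa: F401  # registers PandaReachDense-v3 with gymnasium
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption based on common huggingface_sb3 naming, not confirmed by the card
checkpoint = load_from_hub(
    repo_id="Dhanraj1503/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
print(action)
```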
Norman94/CNN_truck_car_others
Norman94
2024-01-27T05:50:39Z
0
0
null
[ "en", "dataset:cifar10", "region:us" ]
null
2024-01-27T05:41:48Z
--- datasets: - cifar10 language: - en metrics: - accuracy ---
ashishbaraiya/my-tweets-finetuned
ashishbaraiya
2024-01-27T05:25:49Z
1
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T18:38:44Z
--- license: mit base_model: gpt2 tags: - generated_from_keras_callback model-index: - name: ashishbaraiya/my-tweets-finetuned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ashishbaraiya/my-tweets-finetuned This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0656 - Validation Loss: 3.2945 - Epoch: 98 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 4500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 9.3483 | 8.3624 | 0 | | 7.2778 | 6.9685 | 1 | | 5.9195 | 6.2234 | 2 | | 5.0730 | 5.6830 | 3 | | 4.4703 | 5.3916 | 4 | | 3.8427 | 4.8847 | 5 | | 3.3641 | 4.5318 | 6 | | 2.8373 | 4.3084 | 7 | | 2.4261 | 4.0802 | 8 | | 2.0691 | 3.8920 | 9 | | 1.8213 | 3.8208 | 10 | | 1.5922 | 3.6103 | 11 | | 1.3694 | 3.5038 | 12 | | 1.1764 | 3.3149 | 13 | | 1.0135 | 3.2981 | 14 | | 0.8874 | 3.2975 | 15 | | 0.7716 | 3.2103 | 16 | | 0.6679 | 3.3297 | 17 | | 0.5770 | 3.2517 | 18 | | 0.5098 | 3.0959 | 19 | | 0.4403 | 3.1526 | 20 | | 0.3791 | 2.9750 | 21 | | 0.3367 | 3.0588 | 22 | | 0.3027 | 3.0408 | 23 | | 0.2617 | 3.1930 | 24 | | 0.2387 | 3.1227 | 25 | | 0.2175 | 3.0582 | 26 | | 0.2062 | 3.1239 | 27 | | 0.1868 | 3.0407 | 28 | | 0.1746 | 3.2357 | 29 | | 0.1657 | 3.1285 | 30 | | 0.1536 | 3.2110 | 31 | | 0.1512 | 3.1890 | 32 | | 0.1447 | 3.1713 | 33 | | 0.1426 | 3.1498 | 34 | | 0.1369 | 3.1877 | 35 | | 0.1327 | 3.2019 | 36 | | 0.1303 | 3.0486 | 37 | | 0.1213 | 3.1264 | 38 | | 0.1204 | 3.1468 | 39 | | 0.1206 | 3.1846 | 40 | | 0.1125 | 3.1880 | 41 | | 0.1113 | 3.1980 | 42 | | 0.1098 | 3.1759 | 43 | | 0.1071 | 3.1385 | 44 | | 0.1055 | 3.1730 | 45 | | 0.1024 | 3.1820 | 46 | | 0.0995 | 3.1252 | 47 | | 0.0995 | 3.1279 | 48 | | 0.1004 | 3.2428 | 49 | | 0.0982 | 3.1116 | 50 | | 0.0957 | 3.2210 | 51 | | 0.0936 | 3.1351 | 52 | | 0.0917 | 3.1618 | 53 | | 0.0930 | 3.1924 | 54 | | 0.0929 | 3.2831 | 55 | | 0.0889 | 3.2458 | 56 | | 0.0913 | 3.2061 | 57 | | 0.0899 | 3.4128 | 58 | | 0.0880 | 3.2114 | 59 | | 0.0869 | 3.2738 | 60 | | 0.0878 | 3.1723 | 61 | | 0.0844 | 3.1465 | 62 | | 0.0846 | 3.1106 | 63 | | 0.0841 | 3.2216 | 64 | | 0.0824 | 3.2971 | 65 | | 0.0823 | 3.2267 | 66 | | 0.0811 | 3.2503 | 67 | | 0.0823 | 3.1981 | 68 | | 0.0808 | 3.2618 | 69 | | 0.0803 | 3.1607 | 70 | | 0.0786 | 3.3295 | 71 | | 0.0801 | 3.2952 | 72 | | 0.0777 | 3.2545 | 73 | | 0.0764 | 3.1248 | 74 | | 0.0772 | 3.2185 | 75 | | 0.0758 | 3.3147 | 76 | | 0.0764 | 3.1842 | 77 | | 0.0758 | 3.2346 | 78 | | 0.0739 | 3.2914 | 79 | | 0.0738 | 3.2163 | 80 | | 0.0738 | 
3.3555 | 81 | | 0.0731 | 3.0948 | 82 | | 0.0726 | 3.2040 | 83 | | 0.0729 | 3.2187 | 84 | | 0.0709 | 3.2877 | 85 | | 0.0703 | 3.3668 | 86 | | 0.0709 | 3.2290 | 87 | | 0.0712 | 3.3148 | 88 | | 0.0697 | 3.2762 | 89 | | 0.0694 | 3.2083 | 90 | | 0.0688 | 3.2673 | 91 | | 0.0694 | 3.2816 | 92 | | 0.0683 | 3.3135 | 93 | | 0.0680 | 3.2971 | 94 | | 0.0681 | 3.2272 | 95 | | 0.0670 | 3.2317 | 96 | | 0.0662 | 3.2029 | 97 | | 0.0656 | 3.2945 | 98 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.1
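The card leaves usage unspecified; here is a minimal text-generation sketch with the TensorFlow weights (the repo carries a `tf` tag), using illustrative sampling settings:

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

model_name = "ashishbaraiya/my-tweets-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Just finished", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```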
JoeySalmons/ldt-100k_images
JoeySalmons
2024-01-27T05:14:28Z
0
0
null
[ "region:us" ]
null
2024-01-25T20:14:41Z
# Overview These are latent diffusion transformer models trained from scratch on 100k 256x256 images. Checkpoint 278k-full_state_dict.pth has been trained for about 500 epochs and is substantially overfit on the 100k training images. The two checkpoints for 300k and 395k steps were further trained on a Midjourney dataset of 600k images for 9.4 epochs (300k steps) and 50 epochs (395k steps) at a constant LR of 5e-5. The additional training on the MJ dataset took ~8 hours on a 4090 with batch size 256. The models are the same as in the Google Colabs below: embed_dim=512, n_layers=8, total parameters=30507328 (~30M). # Run the Models in Colab https://colab.research.google.com/drive/10yORcKXT40DLvZSceOJ1Hi5z_p5r-bOs?usp=sharing # Colab Training Notebook https://colab.research.google.com/drive/1sKk0usxEF4bmdCDcNQJQNMt4l9qBOeAM?usp=sharing # Github Repo The original training code lives at: https://github.com/apapiu/transformer_latent_diffusion # Datasets used https://huggingface.co/apapiu/small_ldt/tree/main # Other See this Reddit post by u/spring_m (huggingface.co/apapiu) for more information: https://www.reddit.com/r/MachineLearning/comments/198eiv1/p_small_latent_diffusion_transformer_from_scratch/
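A sketch of fetching and inspecting a checkpoint from this repo, assuming only that the `.pth` files are plain PyTorch state dicts as their names suggest; the model class itself lives in the linked GitHub repo:

```python
import torch
from huggingface_hub import hf_hub_download

# Filename taken from the card's checkpoint description
path = hf_hub_download(
    repo_id="JoeySalmons/ldt-100k_images",
    filename="278k-full_state_dict.pth",
)
state_dict = torch.load(path, map_location="cpu")

# Count tensor elements; a "full" state dict may also carry optimizer state,
# so this can exceed the 30,507,328 model parameters reported in the card
total = sum(t.numel() for t in state_dict.values() if torch.is_tensor(t))
print(f"tensor elements in checkpoint: {total:,}")
```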
lbtutor/Taxi-v3
lbtutor
2024-01-27T05:03:06Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T05:02:58Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.67 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gymnasium as gym # load_from_hub is the helper defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="lbtutor/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
stablediffusionapi/yayoimix
stablediffusionapi
2024-01-27T04:59:32Z
60
2
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-27T04:57:21Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # yayoi_mix API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/4828961041706331379.png) ## Get API Key Get an API key from [ModelsLab API](http://modelslab.com); no payment needed. Replace the key in the code below and set **model_id** to "yayoimix". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs) Try the model for free: [Generate Images](https://modelslab.com/models/yayoimix) Model link: [View model](https://modelslab.com/models/yayoimix) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "yayoimix", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
lbtutor/q-FrozenLake-v1-4x4-noSlippery
lbtutor
2024-01-27T04:56:22Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T04:56:20Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gymnasium as gym # load_from_hub is the helper defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="lbtutor/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Slava/tiny-bert-sst2-distilled
Slava
2024-01-27T04:51:14Z
101
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google/bert_uncased_L-2_H-128_A-2", "base_model:finetune:google/bert_uncased_L-2_H-128_A-2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-25T08:06:21Z
--- license: apache-2.0 base_model: google/bert_uncased_L-2_H-128_A-2 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-bert-sst2-distilled results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-bert-sst2-distilled This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9648 - Accuracy: 0.8245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002628217875157273 - train_batch_size: 128 - eval_batch_size: 128 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5717 | 1.0 | 527 | 2.0086 | 0.8073 | | 1.2017 | 2.0 | 1054 | 1.8121 | 0.8222 | | 0.9081 | 3.0 | 1581 | 1.8837 | 0.8177 | | 0.7559 | 4.0 | 2108 | 1.9089 | 0.8234 | | 0.6694 | 5.0 | 2635 | 1.9749 | 0.8177 | | 0.6147 | 6.0 | 3162 | 1.9445 | 0.8257 | | 0.5729 | 7.0 | 3689 | 1.9648 | 0.8245 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.1
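A minimal usage sketch with the 🤗 `pipeline` API, given the model's text-classification task; the label names returned depend on the repo's config:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Slava/tiny-bert-sst2-distilled")
print(classifier("a touching and wonderfully acted film"))
```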
xyfJASON/Diffusion-Models-Implementations
xyfJASON
2024-01-27T04:47:02Z
0
0
null
[ "tensorboard", "dataset:cifar10", "license:mit", "region:us" ]
null
2023-08-10T03:37:36Z
--- license: mit datasets: - cifar10 metrics: - fid --- Checkpoints and training logs for GitHub repository: [xyfJASON/Diffusion-Models-Implementations](https://github.com/xyfJASON/Diffusion-Models-Implementations).
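A sketch for pulling a checkpoint from this repo with `huggingface_hub`; the filename below is purely hypothetical, since the card does not describe the repo's file layout:

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "xyfJASON/Diffusion-Models-Implementations"

# Inspect the actual file layout first
print(list_repo_files(repo_id))

# Hypothetical filename; replace it with one printed above
path = hf_hub_download(repo_id=repo_id, filename="ddpm/cifar10/ckpt.pt")
```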
asparius/UNSEE-Barlow
asparius
2024-01-27T04:37:02Z
44
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-30T14:08:03Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # asparius/UNSEE-Barlow This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('asparius/barlow-72.15') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('asparius/barlow-72.15') model = AutoModel.from_pretrained('asparius/barlow-72.15') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=asparius/barlow-72.15) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 31250 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.BYOLoss.BYOLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 3125, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 0.0001 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 3125, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
asparius/UNSEE-VICReg
asparius
2024-01-27T04:34:16Z
44
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-22T11:41:43Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # asparius/UNSEE-VICReg This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('asparius/vicreg-73.10') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('asparius/vicreg-73.10') model = AutoModel.from_pretrained('asparius/vicreg-73.10') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=asparius/vicreg-73.10) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 31250 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.BYOLoss.BYOLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 3125, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 0.0001 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 3125, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
asparius/UNSEE-BYOL
asparius
2024-01-27T04:33:23Z
44
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-22T11:03:49Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # asparius/UNSEE-BYOL This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('asparius/byol-73.03') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('asparius/byol-73.03') model = AutoModel.from_pretrained('asparius/byol-73.03') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=asparius/byol-73.03) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 31250 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.BYOLoss.BYOLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 3125, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 0.0001 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 3125, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
ycros/llmTechChat-GGUF
ycros
2024-01-27T04:19:23Z
9
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-27T03:31:59Z
GGUF quants of https://huggingface.co/Epiculous/llmTechChat
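A minimal sketch of running one of these quants with `llama-cpp-python`; the quant filename is hypothetical, since the card does not list the available files:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename; check the repo's file listing for the actual quant names
path = hf_hub_download(repo_id="ycros/llmTechChat-GGUF", filename="llmtechchat.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```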
ramsi-k/Taxi-v3-2
ramsi-k
2024-01-27T04:06:57Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T04:06:53Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gymnasium as gym # load_from_hub is the helper defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="ramsi-k/Taxi-v3-2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
MandilNLPwizard/llama-2-7b-platypus-quantised
MandilNLPwizard
2024-01-27T04:01:00Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T01:30:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nitky/Superswallow-7b-v0.2
nitky
2024-01-27T03:52:37Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "en", "ja", "arxiv:2311.10702", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:allenai/tulu-2-dpo-7b", "base_model:merge:allenai/tulu-2-dpo-7b", "base_model:tokyotech-llm/Swallow-7b-instruct-hf", "base_model:merge:tokyotech-llm/Swallow-7b-instruct-hf", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T01:20:52Z
--- base_model: - tokyotech-llm/Swallow-7b-instruct-hf - allenai/tulu-2-dpo-7b tags: - mergekit - merge language: - en - ja library_name: transformers pipeline_tag: text-generation license: llama2 model_type: llama --- # Superswallow-7b-v0.2 **Known Performance Issues:** Swallow 7B models may produce unstable output with text-generation-webui's `Null preset`, and this model inherits that problem. **Important Notice:** This model partially utilizes the parameters of Tulu V2 DPO finetuned based on Llama 2, so it may inherit the AI2 ImpACT license. Please use the model keeping in mind that there may be changes regarding the license if AI2 contacts me. The [AI2 ImpACT license](https://allenai.org/impact-license) includes information about data artifacts and model artifacts, but does not cover the case of directly applying parts of the LLM parameters of a model artifact to other models. However, I respect their research and great work, so I will change the license immediately if AI2 contacts me. ## Description This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The model was created by injecting the ability to follow user intent from [Tulu 2 DPO](https://arxiv.org/abs/2311.10702) into the [Swallow](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) instruct model. It was a proof of concept for merging LLMs trained in different languages, and close attention was paid to preserving the linguistic capabilities of the merge's base model. As far as I know, Swallow is the Llama 2 model family with a full set of sizes (7B, 13B, 70B) that can output the most beautiful Japanese. Therefore, I used it as the base model for this merge. I thank them for their wonderful work. ## Test environment This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I used the `simple-1` preset and the `Null preset` for generation. ### Recommendation Use `simple-1` settings: - temperature: 0.7 - top_p: 0.9 - repetition_penalty: 1.15 - top_k: 20 ### Tested `temperature` Range - temperature: 0.3 - 1.0 It works fine in most cases, but depending on the prompt, the output may become unstable at temperatures around 1.0. ### Tested `repetition_penalty` Range - repetition_penalty: 1.0 - 1.15 It works fine in most cases, but depending on the prompt, the output may become repetitive at a repetition_penalty around 1.0. ## Prompt template ### Tulu Style (Recommended format) ``` <|user|> Your message here! <|assistant|> ``` For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.** ### Swallow Style (Alpaca format) ``` 以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。リクエストを適切に完了するための回答を記述してください。 ### 指示: {instruction} ### 応答: ``` ## Use the instruct model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "nitky/Superswallow-7b-v0.2" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto") PROMPT_DICT = { "prompt_input": ( "以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。" "リクエストを適切に完了するための回答を記述してください。\n\n" "### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:" ), "prompt_no_input": ( "以下に、あるタスクを説明する指示があります。" "リクエストを適切に完了するための回答を記述してください。\n\n" "### 指示:\n{instruction}\n\n### 応答:" ), } def create_prompt(instruction, input=None): """ Generates a prompt based on the given instruction and an optional input. 
    If input is provided, it uses the 'prompt_input' template from PROMPT_DICT.
    If no input is provided, it uses the 'prompt_no_input' template.

    Args:
        instruction (str): The instruction describing the task.
        input (str, optional): Additional input providing context for the task. Default is None.

    Returns:
        str: The generated prompt.
    """
    if input:
        # Use the 'prompt_input' template when additional input is provided
        return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
    else:
        # Use the 'prompt_no_input' template when no additional input is provided
        return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)

# Example usage
instruction_example = "以下のトピックに関する詳細な情報を提供してください。"
input_example = "東京工業大学の主なキャンパスについて教えてください"
prompt = create_prompt(instruction_example, input_example)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=False,
    return_tensors="pt"
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=200,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.15,
    top_k=20,
    do_sample=True,
)

out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) and SLERP merge methods, using [tokyotech-llm/Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf) as a base.

### Models Merged

The following models were included in the merge:
* [allenai/tulu-2-dpo-7b](https://huggingface.co/allenai/tulu-2-dpo-7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: tokyotech-llm/Swallow-7b-instruct-hf
    # no parameters necessary for base model
  - model: allenai/tulu-2-dpo-7b # follow user intent
    parameters:
      density: 1
      weight:
      - filter: mlp.down_proj
        value: [0.45, 0.10, 0.45, 0.10, 0.45, 0.10, 0.45, 0.10, 0.10]
      - filter: mlp.gate_proj
        value: [0.70, 0.10, 0.45, 0.10, 0.45, 0.10, 0.45, 0.10, 0.10]
      - filter: mlp.up_proj
        value: [0.70, 0.10, 0.45, 0.10, 0.45, 0.10, 0.45, 0.10, 0.10]
      - filter: self_attn
        value: [0.70, 0.45, 0.10, 0.45, 0.10, 0.45, 0.10, 0.45, 0.45]
      - value: 0 # fallback for rest of tensors.
merge_method: dare_ties
base_model: tokyotech-llm/Swallow-7b-instruct-hf
dtype: bfloat16
tokenizer_source: union
name: Superswallow-7b-v0.2-flavor
---
slices:
  - sources:
      - model: nitky/Superswallow-7b-baseline
        layer_range: [0, 32]
      - model: Superswallow-7b-v0.2-flavor
        layer_range: [0, 32]
merge_method: slerp
base_model: nitky/Superswallow-7b-baseline
parameters:
  t: # model stabilization
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
name: Superswallow-7b-v0.2
```
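As a supplement to the usage example above, a minimal sketch of building the recommended Tulu-style prompt (it reuses the example question from the script; only the string format differs):

```python
# Tulu-style prompt; note the required trailing newline after <|assistant|>
tulu_prompt = "<|user|>\n東京工業大学の主なキャンパスについて教えてください\n<|assistant|>\n"
```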
ramsi-k/Taxi-v3
ramsi-k
2024-01-27T03:42:08Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T03:42:05Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym

# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="ramsi-k/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
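To actually watch the agent act, a minimal greedy-rollout sketch; it assumes the pickled dict exposes the learned table under a `qtable` key, as in the Deep RL course notebooks, and uses the gymnasium-style `step` API:

```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```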
elliotthwangmsa/phi-2_zh
elliotthwangmsa
2024-01-27T03:41:40Z
35
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T03:38:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
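Since the card above is an empty template, a minimal text-generation sketch for this checkpoint; `trust_remote_code=True` follows from the repo's `custom_code` tag, while the Chinese prompt and generation settings are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elliotthwangmsa/phi-2_zh"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("請簡單介紹台北市。", return_tensors="pt")  # plain prompt; the expected format is undocumented
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```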
chathuranga-jayanath/codet5-small-v4
chathuranga-jayanath
2024-01-27T03:24:52Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Salesforce/codet5-small", "base_model:finetune:Salesforce/codet5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-27T03:24:26Z
--- license: apache-2.0 base_model: Salesforce/codet5-small tags: - generated_from_trainer model-index: - name: codet5-small-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codet5-small-v4 This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7318 - Bleu Score: 0.2737 - Gen Len: 13.7838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu Score | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:----------:|:-------:| | No log | 1.0 | 20 | 1.7693 | 0.281 | 13.5946 | | No log | 2.0 | 40 | 1.0720 | 0.2706 | 13.9189 | | No log | 3.0 | 60 | 0.7318 | 0.2737 | 13.7838 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
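The card does not document usage, so here is a minimal inference sketch; since the training dataset is unknown, the input below is only an assumed example of a code-to-text style query:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "chathuranga-jayanath/codet5-small-v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("public int add(int a, int b) { return a + b; }", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)  # eval Gen Len is ~14 tokens, so outputs are short
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```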
AnonWilber/ppo-LunarLander-v2
AnonWilber
2024-01-27T03:20:44Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-27T03:20:15Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 150.50 +/- 71.47
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption that follows the usual course naming; check the repository's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it into a PPO model
checkpoint = load_from_hub(repo_id="AnonWilber/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
LN1996/peft-qlora-run3
LN1996
2024-01-27T03:16:44Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-27T03:16:12Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
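The template above leaves "How to Get Started" blank; a minimal adapter-loading sketch, assuming the adapter is applied directly on top of the stated base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "LN1996/peft-qlora-run3")  # attach the QLoRA adapter weights
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
```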
rishikasrinivas/distilbert-base-uncased-finetuned-ner
rishikasrinivas
2024-01-27T02:54:49Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-26T20:00:18Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0609 - Precision: 0.9243 - Recall: 0.9358 - F1: 0.9300 - Accuracy: 0.9836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2386 | 1.0 | 878 | 0.0710 | 0.9017 | 0.9207 | 0.9111 | 0.9798 | | 0.0498 | 2.0 | 1756 | 0.0619 | 0.9239 | 0.9319 | 0.9279 | 0.9830 | | 0.0308 | 3.0 | 2634 | 0.0609 | 0.9243 | 0.9358 | 0.9300 | 0.9836 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
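A minimal inference sketch for this checkpoint; the entity label set is not documented (the metrics are consistent with a CoNLL-style NER task), so treat the example as an assumption:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="rishikasrinivas/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face was founded in New York City."))
```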
KnutJaegersberg/SUS-Chat-72B-4bit
KnutJaegersberg
2024-01-27T02:44:25Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-01-26T11:30:45Z
--- license: other license_name: qwen license_link: LICENSE --- Prompt Example: ``` ### Human: Explain how AGI and collective intelligence are logically connected. ### Assistant: ``` Quantization settings: {'load_in_4bit': True, 'bnb_4bit_compute_dtype': torch.bfloat16, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True}
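For reference, the listed settings map onto a `BitsAndBytesConfig` as sketched below; the repo already ships 4-bit weights, so this mainly documents how an equivalent quantized load of the original model would be configured:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)
# e.g. passed as quantization_config=bnb_config to AutoModelForCausalLM.from_pretrained(...)
```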
mbearss/dummy-model
mbearss
2024-01-27T02:27:15Z
93
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-27T02:25:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
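As the card is an empty template, a minimal fill-mask sketch; CamemBERT-style models use `<mask>` as the mask token, and the French example sentence is an assumption:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mbearss/dummy-model")
print(fill_mask("Le camembert est <mask> :)"))
```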
abrahamabelboodala/ALPHAGEOMETRY_ag_ckpt_vocab
abrahamabelboodala
2024-01-27T02:18:52Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-01-27T01:23:00Z
---
license: apache-2.0
# Copyright 2023 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
---

# AlphaGeometry Weights and Vocabulary Setup

This repository provides an alternative method for setting up AlphaGeometry, particularly useful for users encountering errors with the `bash download.sh` script from Google Drive.

## Common Error

When running the `download.sh` script, you may encounter the following error:

```bash
bash download.sh

# Error Output:
DATA=ag_ckpt_vocab
Retrieving folder list
Failed to retrieve folder contents:
file/folder name cannot be extracted from: ag_ckpt_vocab – Google Drive
```

## Resolution Steps

1. **Create a Folder:**
   - At the root of your alphageometry folder, create a new folder named `ag_ckpt_vocab`. Example: `alphageometry/ag_ckpt_vocab`

2. **Download Required Files:**
   - Download and place the following three files inside your `ag_ckpt_vocab` folder:
     1. `geometry.757.vocab`
     2. `geometry.757.model`
     3. `checkpoint_10999999`

3. **Set Environment Variable:**
   - Open a bash shell at the root of your alphageometry folder.
   - Execute the following command:
   ```bash
   export DATA=ag_ckpt_vocab
   ```

### Alternative to Step 3

Alternatively, you can keep the environment variable setup in a script:

1. Create a file named `extrasetup.sh` in the root of your alphageometry folder.
2. Add the following line to `extrasetup.sh`:
   ```bash
   export DATA=ag_ckpt_vocab
   ```
3. In a bash shell at the root of your alphageometry folder, source the script (running it with `bash extrasetup.sh` would only set the variable in a subshell, not in your current shell):
   ```bash
   source extrasetup.sh
   ```
MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T02:18:15Z
60
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "lcw99/zephykor-ko-beta-7b-chang", "ko", "en", "autotrain_compatible", "endpoints_compatible", "region:us", "license:apache-2.0", "base_model:MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T02:09:24Z
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- lcw99/zephykor-ko-beta-7b-chang
- ko
- en
- autotrain_compatible
- endpoints_compatible
- region:us
- license:apache-2.0
model_name: zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1)

## Description

[MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1).

## How to use

Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: [MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./zephykor-ko-beta-7b-chang-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
kanishka/smolm-autoreg-bpe-counterfactual-babylm-keys_to_pipps_all-1e-3
kanishka
2024-01-27T01:59:03Z
16
0
transformers
[ "transformers", "tensorboard", "safetensors", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T02:55:09Z
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-keys_to_pipps_all-1e-3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smolm-autoreg-bpe-counterfactual-babylm-keys_to_pipps_all-1e-3

This model was trained from scratch on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3449
- Accuracy: 0.4108

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.6001        | 1.0   | 19014  | 3.7251          | 0.3609   |
| 3.3959        | 2.0   | 38028  | 3.5040          | 0.3820   |
| 3.2662        | 3.0   | 57042  | 3.4255          | 0.3932   |
| 3.1892        | 4.0   | 76056  | 3.3537          | 0.3992   |
| 3.134         | 5.0   | 95070  | 3.3279          | 0.4033   |
| 3.0941        | 6.0   | 114084 | 3.3172          | 0.4041   |
| 3.0584        | 7.0   | 133098 | 3.3045          | 0.4065   |
| 3.026         | 8.0   | 152112 | 3.3046          | 0.4071   |
| 2.9961        | 9.0   | 171126 | 3.2926          | 0.4084   |
| 2.974         | 10.0  | 190140 | 3.2889          | 0.4090   |
| 2.9516        | 11.0  | 209154 | 3.3076          | 0.4090   |
| 2.9292        | 12.0  | 228168 | 3.2888          | 0.4104   |
| 2.9071        | 13.0  | 247182 | 3.2999          | 0.4101   |
| 2.8845        | 14.0  | 266196 | 3.3064          | 0.4107   |
| 2.8669        | 15.0  | 285210 | 3.3155          | 0.4104   |
| 2.8505        | 16.0  | 304224 | 3.3300          | 0.4105   |
| 2.8318        | 17.0  | 323238 | 3.3195          | 0.4110   |
| 2.8155        | 18.0  | 342252 | 3.3315          | 0.4109   |
| 2.7914        | 19.0  | 361266 | 3.3386          | 0.4109   |
| 2.7769        | 20.0  | 380280 | 3.3449          | 0.4108   |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.14.1
upaya07/Arithmo2-Mistral-7B-adapter
upaya07
2024-01-27T01:54:42Z
141
2
peft
[ "peft", "Mathematical Reasoning", "en", "dataset:akjindal53244/Arithmo-Data", "arxiv:2309.12284", "arxiv:2309.05653", "arxiv:2210.17517", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:mit", "region:us" ]
null
2024-01-14T07:16:12Z
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
license: mit
tags:
- Mathematical Reasoning
datasets:
- akjindal53244/Arithmo-Data
language:
- en
---

The **Arithmo2-Mistral-7B** model improves on the initially released [Arithmo-Mistral-7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B) model on both the GSM8K and MATH benchmarks. Specifically, there is an **absolute** improvement of:
- +1.7% on GSM8K
- +3.0% on GSM8K PoT
- +1.9% on MATH

**This repo contains the LoRA adapter weights**. If you are interested in the final merged model, use [Arithmo2-Mistral-7B](https://huggingface.co/upaya07/Arithmo2-Mistral-7B) instead.

### Model Description

- **Project GitHub Page:** https://github.com/akjindal53244/Arithmo
- **Developed by:** [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/)
- **Funded by:** self-work
- **Model type:** fine-tuned using QLoRA on a single GPU
- **Language(s) (NLP):** English
- **Finetuned from model:** mistralai/Mistral-7B-v0.1

## Results

Arithmo2-Mistral-7B is an improved version of the [Arithmo-Mistral-7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B) model and is competitive with fully fine-tuned state-of-the-art 7B mathematical reasoning models. Refer to the [Comparing Arithmo models with other SFT LLM models](https://github.com/akjindal53244/Arithmo/tree/master?tab=readme-ov-file#comparing-arithmo-models-with-other-sft-llm-models) section for more details.

<table>
 <thead>
  <tr>
   <th>Prompt Approach</th>
   <th>GSM8k</th>
   <th>MATH</th>
  </tr>
 </thead>
 <tbody>
  <tr>
   <td>Zero-Shot CoT</td>
   <td><b>76.4</b></td>
   <td><b>27.2</b></td>
  </tr>
  <tr>
   <td>Zero-Shot PoT</td>
   <td><b>74.2</b></td>
   <td>-</td>
  </tr>
 </tbody>
</table>

- **Zero-Shot CoT**: On providing a question as a prompt, the model generates reasoning steps to solve the question along with the answer. We check whether the answer matches the ground truth.
- **Zero-Shot PoT**: We prompt the model to generate a Python program for the given question. During inference, we execute the Python program generated by the model and check whether the program output matches the ground-truth answer.

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.6.0.dev0

## Installation

```
pip install "transformers>=4.34.0"
pip install accelerate
pip install sentencepiece
pip install protobuf

# If you are GPU poor like me
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

# If you have a GPU.
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118
pip install scipy
pip install bitsandbytes
```

## How to query the model

```
# Set `run_model_on_gpu` to `False` if you are running on CPU. The model will generate reasoning steps with an answer for your question. If you want to generate a Python program, uncomment line-69 that adds a Python prompt.
# This script automatically does the formatting for you, so you just need to type a question (e.g. `What is 2+2?`) without any prefix like `Question:`.
$ python query_model.py
```

**Note:** The above script automatically does the formatting for you, so you just need to type a question (e.g. `What is 2+2?`) without any prefix like `Question:`.
Check out [query_model.py](https://github.com/akjindal53244/Arithmo/blob/master/query_model.py) for more details.
<br><br>

##### Sample Input:
```
Question: There are total 10 children. I have to give 1 apple to first child, 2 apples to second child, 3 apples to third child, and so on. How many apples do I need?
```

##### Model Output:
```
Answer: The total number of apples needed is the sum of the first 10 positive integers.
This can be calculated using the formula for the sum of an arithmetic series:
\[S = \frac{n}{2}(a_1 + a_n),\]
where $S$ is the sum, $n$ is the number of terms, $a_1$ is the first term, and $a_n$ is the last term.
In this case, $n = 10$, $a_1 = 1$, and $a_n = 10$.
Plugging these values into the formula, we get:
\[S = \frac{10}{2}(1 + 10) = 5(11) = \boxed{55}.\]
The answer is: 55
```

Arithmo2-Mistral-7B is trained with the same format as [Arithmo-Mistral-7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B):

#### CoT Format (generate reasoning steps with answer):
```
Question: <question>

Answer:
```

#### PoT Format (generate a python program):
```
Question: <question> <python_prompt>

Answer:
```

It will perform best if queried in this way with your own script.

## Comparing Arithmo models with other SFT LLM models

Results for all models except `Arithmo2-Mistral-7B` are taken from the [MetaMath](https://github.com/meta-math/MetaMath/blob/main/README.MD) repository.

| Model | GSM8k Pass@1 | MATH Pass@1 | Fine-tuning |
|---------------------|--------------|-------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| Arithmo-Mistral-7B (PoT) | 71.2 | -- | SFT: 4-bit QLoRA |
| Arithmo2-Mistral-7B (PoT) | 74.2 | -- | SFT: 4-bit QLoRA |
| MetaMath-Mistral-7B | 77.7 | 28.2 | SFT: Full fine-tuned |
| Arithmo-Mistral-7B | 74.7 | 25.3 | SFT: 4-bit QLoRA |
| 🔥 **Arithmo2-Mistral-7B** | **76.4** | **27.2** | **SFT: 4-bit QLoRA** |

If you are interested in reproducing the results, visit the https://github.com/akjindal53244/Arithmo#reproducing-results section.

### Support My Work

Building LLMs takes time and resources; if you find my work interesting, your support would be epic!
<a href="https://www.buymeacoffee.com/a_little_learner" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> ### Citation To cite Arithmo models: ``` @misc{jindal_2023_arithmo, author = {Jindal, Ashvini}, title = {Arithmo-Mistral-7B: Mathematical Reasoning Model}, howpublished = {Hugging Face}, month = {October}, year = {2023}, url = {https://huggingface.co/akjindal53244/Arithmo-Mistral-7B} } ``` <h2 id="References">References</h2> ``` @article{yu2023metamath, title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models}, author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang}, journal={arXiv preprint arXiv:2309.12284}, year={2023} } @article{Yue2023mammoth, title={MAmmoTH: Building math generalist models through hybrid instruction tuning}, author={Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen}, journal={arXiv preprint arXiv:2309.05653}, year={2023} } @article{mishra2022lila, title={Lila: A unified benchmark for mathematical reasoning}, author={Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan}, journal={arXiv preprint arXiv:2210.17517}, year={2022} } ```
upaya07/Arithmo2-Mistral-7B
upaya07
2024-01-27T01:54:27Z
297
8
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Mathematical Reasoning", "en", "dataset:akjindal53244/Arithmo-Data", "arxiv:2309.12284", "arxiv:2309.05653", "arxiv:2210.17517", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-14T07:11:42Z
---
license: mit
language:
- en
datasets:
- akjindal53244/Arithmo-Data
tags:
- Mathematical Reasoning
---

The **Arithmo2-Mistral-7B** model improves on the initially released [Arithmo-Mistral-7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B) model on both the GSM8K and MATH benchmarks. Specifically, there is an **absolute** improvement of:
- +1.7% on GSM8K
- +3.0% on GSM8K PoT
- +1.9% on MATH

**This repo contains the final merged model**. If you are interested in the LoRA adapter, use [LoRA Adapter](https://huggingface.co/upaya07/Arithmo2-Mistral-7B-adapter) instead.

### Model Description

- **Project GitHub Page:** https://github.com/akjindal53244/Arithmo
- **Developed by:** [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/)
- **Funded by:** self-work
- **Model type:** fine-tuned using QLoRA on a single GPU
- **Language(s) (NLP):** English
- **Finetuned from model:** mistralai/Mistral-7B-v0.1

## Results

Arithmo2-Mistral-7B is an improved version of the [Arithmo-Mistral-7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B) model and is competitive with fully fine-tuned state-of-the-art 7B mathematical reasoning models. Refer to the [Comparing Arithmo models with other SFT LLM models](https://github.com/akjindal53244/Arithmo/tree/master?tab=readme-ov-file#comparing-arithmo-models-with-other-sft-llm-models) section for more details.

<table>
 <thead>
  <tr>
   <th>Prompt Approach</th>
   <th>GSM8k</th>
   <th>MATH</th>
  </tr>
 </thead>
 <tbody>
  <tr>
   <td>Zero-Shot CoT</td>
   <td><b>76.4</b></td>
   <td><b>27.2</b></td>
  </tr>
  <tr>
   <td>Zero-Shot PoT</td>
   <td><b>74.2</b></td>
   <td>-</td>
  </tr>
 </tbody>
</table>

- **Zero-Shot CoT**: On providing a question as a prompt, the model generates reasoning steps to solve the question along with the answer. We check whether the answer matches the ground truth.
- **Zero-Shot PoT**: We prompt the model to generate a Python program for the given question. During inference, we execute the Python program generated by the model and check whether the program output matches the ground-truth answer.

## Installation

```
pip install "transformers>=4.34.0"
pip install accelerate
pip install sentencepiece
pip install protobuf

# If you are GPU poor like me
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

# If you have a GPU.
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118
pip install scipy
pip install bitsandbytes
```

## How to query the model

```
# Set `run_model_on_gpu` to `False` if you are running on CPU. The model will generate reasoning steps with an answer for your question. If you want to generate a Python program, uncomment line-69 that adds a Python prompt.
# This script automatically does the formatting for you, so you just need to type a question (e.g. `What is 2+2?`) without any prefix like `Question:`.
$ python query_model.py
```

**Note:** The above script automatically does the formatting for you, so you just need to type a question (e.g. `What is 2+2?`) without any prefix like `Question:`. Check out [query_model.py](https://github.com/akjindal53244/Arithmo/blob/master/query_model.py) for more details.
<br><br>

##### Sample Input:
```
Question: There are total 10 children. I have to give 1 apple to first child, 2 apples to second child, 3 apples to third child, and so on. How many apples do I need?
```

##### Model Output:
```
Answer: The total number of apples needed is the sum of the first 10 positive integers.
This can be calculated using the formula for the sum of an arithmetic series:
\[S = \frac{n}{2}(a_1 + a_n),\]
where $S$ is the sum, $n$ is the number of terms, $a_1$ is the first term, and $a_n$ is the last term.
In this case, $n = 10$, $a_1 = 1$, and $a_n = 10$.
Plugging these values into the formula, we get:
\[S = \frac{10}{2}(1 + 10) = 5(11) = \boxed{55}.\]
The answer is: 55
```

Arithmo2-Mistral-7B is trained with the same format as [Arithmo-Mistral-7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B):

#### CoT Format (generate reasoning steps with answer):
```
Question: <question>

Answer:
```

#### PoT Format (generate a python program):
```
Question: <question> <python_prompt>

Answer:
```

It will perform best if queried in this way with your own script.

## Comparing Arithmo models with other SFT LLM models

Results for all models except `Arithmo2-Mistral-7B` are taken from the [MetaMath](https://github.com/meta-math/MetaMath/blob/main/README.MD) repository.

| Model | GSM8k Pass@1 | MATH Pass@1 | Fine-tuning |
|---------------------|--------------|-------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| Arithmo-Mistral-7B (PoT) | 71.2 | -- | SFT: 4-bit QLoRA |
| Arithmo2-Mistral-7B (PoT) | 74.2 | -- | SFT: 4-bit QLoRA |
| MetaMath-Mistral-7B | 77.7 | 28.2 | SFT: Full fine-tuned |
| Arithmo-Mistral-7B | 74.7 | 25.3 | SFT: 4-bit QLoRA |
| 🔥 **Arithmo2-Mistral-7B** | **76.4** | **27.2** | **SFT: 4-bit QLoRA** |

If you are interested in reproducing the results, visit the https://github.com/akjindal53244/Arithmo#reproducing-results section.

### Support My Work

Building LLMs takes time and resources; if you find my work interesting, your support would be epic!
<a href="https://www.buymeacoffee.com/a_little_learner" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> ### Citation To cite Arithmo models: ``` @misc{jindal_2023_arithmo, author = {Jindal, Ashvini}, title = {Arithmo-Mistral-7B: Mathematical Reasoning Model}, howpublished = {Hugging Face}, month = {October}, year = {2023}, url = {https://huggingface.co/akjindal53244/Arithmo-Mistral-7B} } ``` <h2 id="References">References</h2> ``` @article{yu2023metamath, title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models}, author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang}, journal={arXiv preprint arXiv:2309.12284}, year={2023} } @article{Yue2023mammoth, title={MAmmoTH: Building math generalist models through hybrid instruction tuning}, author={Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen}, journal={arXiv preprint arXiv:2309.05653}, year={2023} } @article{mishra2022lila, title={Lila: A unified benchmark for mathematical reasoning}, author={Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan}, journal={arXiv preprint arXiv:2210.17517}, year={2022} } ```
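As a supplement to the query script above, a minimal direct-query sketch with plain `transformers`, using the documented CoT format; the generation settings below are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upaya07/Arithmo2-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: What is 2+2?\n\nAnswer:"  # CoT format from the card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```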
MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-27T01:45:00Z
54
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "OpenBuddy/openbuddy-zephyr-7b-v14.1", "pytorch", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "license:apache-2.0", "autotrain_compatible", "region:us", "endpoints_compatible", "base_model:MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-27T01:36:15Z
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- OpenBuddy/openbuddy-zephyr-7b-v14.1
- pytorch
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- license:apache-2.0
- autotrain_compatible
- region:us
- endpoints_compatible
model_name: openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1)

## Description
[MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1).

## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

</details>

## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: [MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
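To make the LangChain route concrete, here is a hedged sketch using the `LlamaCpp` wrapper from `langchain-community`; the model path reuses the Q4_K_M file downloaded above, and the sampling parameters are illustrative assumptions rather than recommendations from this card.

```python
# Hedged LangChain + llama-cpp-python sketch; parameters are illustrative
# assumptions, not recommendations from this card.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./openbuddy-zephyr-7b-v14.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # match the context length used in the llama.cpp example above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

print(llm.invoke("Write a short story about llamas."))
```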
sqiangcao/sd-class-butterflies-64
sqiangcao
2024-01-27T01:43:38Z
45
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-01-27T01:42:39Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('sqiangcao/sd-class-butterflies-64') image = pipeline().images[0] image ```
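`DDPMPipeline` can also sample several images per call; the sketch below is a small extension of the snippet above, where the batch size and step count are illustrative assumptions (1000 steps is the DDPM default).

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('sqiangcao/sd-class-butterflies-64')

# Sample four butterflies in one call; fewer inference steps trade
# quality for speed.
images = pipeline(batch_size=4, num_inference_steps=1000).images
for i, image in enumerate(images):
    image.save(f'butterfly_{i}.png')
```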
macadeliccc/SOLAR-10.7b-Instruct-truthy-dpo-GGUF
macadeliccc
2024-01-27T01:33:18Z
70
3
null
[ "gguf", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-25T21:06:53Z
---
license: mit
---

```
----Benchmark Complete----
2024-01-26 10:18:23
Time taken: 20.9 mins
Prompt Format: ChatML
Model: macadeliccc/SOLAR-10.7b-Instruct-truthy-dpo-GGUF
Score (v2): 73.46
Parseable: 171.0
---------------
Batch completed
Time taken: 21.0 mins
```
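The run above used the ChatML prompt format. A minimal hedged sketch of querying one of the GGUF files with llama-cpp-python follows; the quant filename is a hypothetical placeholder, so check the repo's file list for the actual names.

```python
# Hedged sketch: the filename below is a hypothetical placeholder -- pick a
# real quant from this repo's file list before running.
from llama_cpp import Llama

llm = Llama(
    model_path="./SOLAR-10.7b-Instruct-truthy-dpo.Q4_K_M.gguf",  # hypothetical
    n_ctx=4096,
)

# ChatML, matching the prompt format reported in the benchmark above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```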
ntc-ai/SDXL-LoRA-slider.cinematic-lighting
ntc-ai
2024-01-27T01:28:52Z
76
9
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-27T01:28:49Z
---
language:
- en
thumbnail: "images/evaluate/cinematic lighting.../cinematic lighting_17_3.0.png"
widget:
- text: cinematic lighting
  output:
    url: images/cinematic lighting_17_3.0.png
- text: cinematic lighting
  output:
    url: images/cinematic lighting_19_3.0.png
- text: cinematic lighting
  output:
    url: images/cinematic lighting_20_3.0.png
- text: cinematic lighting
  output:
    url: images/cinematic lighting_21_3.0.png
- text: cinematic lighting
  output:
    url: images/cinematic lighting_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "cinematic lighting"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---

# ntcai.xyz slider - cinematic lighting (SDXL LoRA)

| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/cinematic lighting_17_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_17_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_17_3.0.png" width=256 height=256 /> |
| <img src="images/cinematic lighting_19_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_19_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_19_3.0.png" width=256 height=256 /> |
| <img src="images/cinematic lighting_20_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_20_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_20_3.0.png" width=256 height=256 /> |

## Download

Weights for this model are available in Safetensors format.

## Trigger words

You can apply this LoRA with trigger words for additional effect:

```
cinematic lighting
```

## Use in diffusers

```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch

pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.cinematic-lighting', weight_name='cinematic lighting.safetensors', adapter_name="cinematic lighting")

# Activate the LoRA
pipe.set_adapters(["cinematic lighting"], adapter_weights=[2.0])

prompt = "medieval rich kingpin sitting in a tavern, cinematic lighting"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```

A short sketch that sweeps the slider weight appears at the end of this card.

## Support the Patreon

If you like this model, please consider [joining our Patreon](https://www.patreon.com/NTCAI).

By joining our Patreon, you'll gain access to an ever-growing library of 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.

Your support on Patreon will allow us to continue developing and refining new models.

## Other resources

- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
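Since this LoRA is a slider, a natural follow-up to the diffusers snippet above is sweeping the adapter weight across the range shown in the strength table; this hedged sketch reuses `pipe`, `prompt`, and the other settings from that example, and the weight range is an assumption based on the table rather than an official recommendation.

```python
# Sweep the slider weight, reusing `pipe`, `prompt`, and the settings from
# the "Use in diffusers" example above; the -3..3 range mirrors the
# strength table and is an assumption, not an official recommendation.
for weight in [-3.0, -1.5, 0.0, 1.5, 3.0]:
    pipe.set_adapters(["cinematic lighting"], adapter_weights=[weight])
    image = pipe(prompt, negative_prompt=negative_prompt,
                 width=width, height=height,
                 guidance_scale=guidance_scale,
                 num_inference_steps=num_inference_steps).images[0]
    image.save(f'result_weight_{weight}.png')
```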